One of the major difficulties with Unicode handling is not just that there are poor implementations out there with legacy baggage, but that there is a lot of poor advice as well (or well-meaning advice that seems correct but misses some corner case or some language). For example, this article wants to count "graphemes", and the author goes through three versions of an algorithm to account for surrogate pairs and various combining marks. All seems well in the test cases the author shows, but combining marks are only one class of codepoints that can join to form a grapheme, and the algorithm will fail for other grapheme clusters such as 'நி' (Tamil letter NA + Tamil vowel sign I), or Hangul made of conjoining jamo (such as '깍': 'ᄁ' + 'ᅡ' + 'ᆨ'), or other control characters.
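To make the failure mode concrete, here's a sketch of that style of counter (my own reconstruction, not the article's exact code) and where it breaks:

```js
// Naive "grapheme counter": collapse surrogate pairs, strip nonspacing
// combining marks, then take the length. A reconstruction of the flawed
// approach, not the article's actual code.
function countSymbolsNaive(str) {
  return str
    .replace(/[\uD800-\uDBFF][\uDC00-\uDFFF]/g, '_')           // surrogate pair -> 1 unit
    .replace(/[\u0300-\u036F\u1AB0-\u1AFF\u20D0-\u20FF]/g, '') // drop combining marks
    .length;
}

console.log(countSymbolsNaive('man\u0303ana'));       // 6 — 'mañana' in NFD, looks right
console.log(countSymbolsNaive('நி'));                 // 2 — wrong: U+0BBF is a spacing mark (Mc), not stripped
console.log(countSymbolsNaive('\u1101\u1161\u11A8')); // 3 — wrong: conjoining jamo form one syllable, '깍'
```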
Luckily, the Unicode Technical Committee has figured this out for you, and UAX#29 provides an algorithm for determining grapheme cluster boundaries [1]. Yes, it's long and technical, it has many cases (and exceptions) to handle, and it can't be expressed compactly in two lines of JavaScript; but it will give you a well-defined and understood answer for all scripts in Unicode.
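And these days you don't even have to implement it by hand in JavaScript: the UAX #29 default rules ship in the language via Intl.Segmenter (a relatively recent addition, so check engine support):

```js
// Grapheme cluster counting via the built-in UAX #29 segmenter.
const seg = new Intl.Segmenter(undefined, { granularity: 'grapheme' });
const countGraphemes = s => [...seg.segment(s)].length;

console.log(countGraphemes('நி'));                 // 1 — NA + vowel sign form one cluster
console.log(countGraphemes('\u1101\u1161\u11A8')); // 1 — conjoining jamo
console.log(countGraphemes('깍'));                 // 1 — same answer for the precomposed syllable
```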
One thing I've never understood is why it is so important to count graphemes.
I've read dozens (hundreds?) of Unicode-related blogposts for many different languages, with long debates and discussions about the hurdles of counting graphemes, but they always forget to explain why one would need it; it's just assumed that it's important or interesting. This specific post just says: "Let's say you want to count the number of symbols in a given string, for example. How would you go about it?" and then goes into a multi-page explanation, which is even incomplete (as you correctly noticed).
I can't remember many cases in my programming activity in which it's been useful to count graphemes. I usually need to either:
1) count the number of bytes of the Unicode encoding I'm using / going to use, for the purpose of low-level stuff like buffers/sockets/memory/etc. (see the sketch after this list);
2) ask a graphic library to tell me how big the string will be on the screen, in pixels (with the given fonts, layout, hints, and whatnot).
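For case (1), a quick illustration in JavaScript of why the byte count depends on the encoding and matches neither code units nor symbols (assuming TextEncoder is available):

```js
const s = 'abc💩';                               // U+1F4A9 lies outside the BMP
console.log(new TextEncoder().encode(s).length); // 7  — UTF-8 bytes (3 + 4)
console.log(s.length * 2);                       // 10 — UTF-16 bytes (5 code units)
console.log([...s].length);                      // 4  — code points, yet another number
```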
Counting graphemes only sounds useful for things like command-line terminals; e.g., if I were to make a command-line user interface (à la getopt()) which automatically word-wraps the text of the usage screen at the 80th column, I would need to count graphemes, in the unlikely case I had to support Tamil or Korean for such a specialized use case.
tl;dr: counting graphemes is a very complicated problem you probably won't ever need to solve.
Counting graphemes may be over-used, but needing to know their boundaries is important (and leads naturally to counting). For example, when you hit "delete" in a text editor, you'll probably want it to delete whole graphemes (and similarly for text selection); if you're doing text truncation, you may measure it by pixels, but you'll want to chop off the excess bytes at a grapheme boundary.
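A hedged sketch of grapheme-safe truncation in JavaScript, assuming Intl.Segmenter is available (the helper name is mine):

```js
// Truncate to at most maxUnits UTF-16 code units, never splitting a cluster.
function truncateAtGrapheme(s, maxUnits) {
  const seg = new Intl.Segmenter(undefined, { granularity: 'grapheme' });
  let end = 0;
  for (const { index, segment } of seg.segment(s)) {
    if (index + segment.length > maxUnits) break;
    end = index + segment.length;
  }
  return s.slice(0, end);
}

console.log(truncateAtGrapheme('நிநிநி', 3)); // 'நி' — rounds down to a cluster boundary
```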
> in the unlikely case I had to support Tamil or Korean for such a specialized use case.
Why is it "unlikely" that you would want your software to support users of other languages?
In the case of a delete action in a text editor, are you sure that deleting the whole grapheme is actually what the Tamil or Korean user wants?
You mentioned the following examples in your grandparent post:
- 'நி' (Tamil letter NA + Tamil Vowel Sign I)
- Hangul made of conjoining Jamo (such as '깍': 'ᄁ' + 'ᅡ' + 'ᆨ')
I don't speak either language, but it doesn't seem unreasonable to me that pressing Delete would delete just the vowel sign in Tamil, or just the last component within the Hangul character. In fact, that might be just what the user wants?
> I don't speak either language, but it doesn't seem unreasonable to me that pressing Delete would delete just the vowel sign in Tamil, or just the last component within the Hangul character. In fact, that might be just what the user wants?
My Korean is pretty poor, but I think that's exactly what one wants. If you mistype a letter, you want to retype that letter, not the whole syllable. However, this should work uniformly: it shouldn't matter whether the syllable is represented as a single codepoint or made up of conjoining jamo.
If the Hangul and Tamil constructs are anything like ligatures (e.g. fi in the Latin alphabet), I would imagine that's the case most of the time. Plus lots of special rules for which glyphs to treat as single symbols and which to decompose (e.g. & is technically a ligature but almost never decomposed).
> are you sure that deleting the whole grapheme is actually what the Tamil or Korean user wants?
I'm not, but I think it's the only sane thing for a text editor to do if you don't want it to incorporate a ton of language-specific rules. UAX #29 actually does make a distinction between "legacy" and "extended" grapheme clusters: if you're handling "delete", you'll want legacy clusters, which separate the two Tamil marks; but for text selection, extended clusters will combine them (it's a little more complicated than that, but there are properties in Unicode that let you handle the "preferred" method for editing a script while remaining mostly language-agnostic).
Hangul is trickier, but input happens through an IME that "composes" the characters before they are committed to the editor.
The IME will perform component-wise deletion, but once it's committed, the editor will operate on the grapheme. It's not a perfect solution, but keeping the composition/decomposition rules for the language in the IME seems preferable.
> Why is it "unlikely" that you would want your software to support users of other languages?
I was specifically referring to the use case of translating a command-line usage text (à la --help). I'd assume that translating that into Tamil is not exactly common (statistically speaking), or otherwise all getopt()-like libraries would already support this for me.
Discussions of Unicode often centre on the issue of counting symbols/graphemes/bytes/etc., and I often wonder what the use case is for counting anything other than the number of bytes (for storage) or the size in device units of the output text from a rendering engine (for display). All the options in between seem like a pure exercise.
The reality seems to be that the 'size' of text is entirely dependent on context, and even forward-thinking articles on the subject seem to get hung up on counting things that don't matter.
If you're truncating by character (or WHATEVER) counts, you are guaranteed to be doing it wrong - maybe not in your native language, but in somebody's.
If it's storage space, you truncate by bytes, rounding down to the nearest complete grapheme; no need to count graphemes.
If it's display space, truncate by pixels, in which case you need "size in device units of the output text from a rendering engine". Again, no need to count graphemes.
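For the storage case, a rough sketch of byte-budget truncation that rounds down to a grapheme boundary, assuming Intl.Segmenter and TextEncoder (the helper name is mine):

```js
function truncateToBytes(s, maxBytes) {
  const enc = new TextEncoder();
  const seg = new Intl.Segmenter(undefined, { granularity: 'grapheme' });
  let out = '';
  for (const { segment } of seg.segment(s)) {
    // Re-encoding on each step is O(n^2), but it keeps the sketch simple.
    if (enc.encode(out + segment).length > maxBytes) break;
    out += segment;
  }
  return out;
}

console.log(truncateToBytes('깍깍', 4)); // '깍' — each syllable is 3 UTF-8 bytes
```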
I'm trying to imagine a use case for grapheme-wise truncate that wouldn't be better served by counting bytes, code points, pixels, or some sort of localized letter-counting convention.
The article mentions several examples of real-world issues (in popular libraries etc.) that stem from this behavior. Writing a regular expression to match a single symbol sure seems easy until you realize some characters have a length of `2`, not `1`, in JavaScript (and some other languages).
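The regex pitfall fits in two lines (the u flag is an ES2015 addition):

```js
console.log(/^.$/.test('💩'));  // false — '.' matches one UTF-16 code unit; '💩' is two
console.log(/^.$/u.test('💩')); // true  — with /u, '.' spans the whole surrogate pair
```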
These are less JavaScript problems than UTF-16 problems: the whole "one character is not one code point" problem. It's common to Java, .NET, basically all of Windows, and anything else that uses UTF-16 strings. The solution is easy: if you need a one-to-one mapping of code points to characters, convert to UTF-32 first.
UTF-8 has the same problem; the only difference is that people know characters and code units don't match up there, whereas with UTF-16 there's a bunch of people, either new or people who should never have been programmers to begin with, who are clueless about it. Sadly this number is so large that just about any program that uses UTF-16 strings is broken for inputs where code units != code points. This is partly the fault of the languages and libraries, which give you functions like substring, reverse, etc. on UTF-16 strings, where they basically have no consistent meaning. UTF-16 should have been a storage format, not a manipulation format.
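In JavaScript terms, the code unit / code point split looks like this (iterating by code point is the moral equivalent of the UTF-32 conversion above):

```js
const s = '𝒳';                              // U+1D4B3, stored as a surrogate pair
console.log(s.length);                      // 2 — UTF-16 code units
console.log(s.charCodeAt(0).toString(16));  // 'd835' — a lone surrogate half
console.log(s.codePointAt(0).toString(16)); // '1d4b3' — the actual code point
console.log([...s].length);                 // 1 — string iteration walks code points
```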
">These are less JavaScript problems than utf-16 problems"
The issues related to combining marks are not UTF-16 problems and are not solved by converting to codepoints.
Also, Java, as well as many other UTF-16 based languages, has much better Unicode support than JavaScript (like access to codepoints and Unicode character classes in regular expressions).
As always, if something can be done in a sloppy broken way JavaScript will take advantage of it to the fullest.
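(For what it's worth, JavaScript has since gained Unicode property escapes in ES2018, which narrows that particular gap:)

```js
console.log(/\p{L}/u.test('ந'));       // true — matches any Unicode letter
console.log(/\p{Mc}/u.test('\u0BBF')); // true — Tamil vowel sign I, a spacing combining mark
```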
The article makes the point that substrings and reversing text (when does anyone ever do that, actually, apart from coding exercises?) don't even work naïvely when you consider UTF-32. Yes, the code unit / code point dichotomy is annoying and plenty of people don't know about it, but there are many more pitfalls in Unicode when you don't know what you're doing.
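For example, even a code-point-aware reverse (i.e. UTF-32 semantics) mangles combining marks:

```js
const s = 'me\u0301e';                  // renders as 'mée' (combining acute on the first e)
console.log([...s].reverse().join('')); // 'éem' — the accent jumped to the other e; 'eém' was intended
```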
If you have an easy solution of deprecating UTF-16 everywhere where it's used (while not breaking anything that currently works), I'm all ears. Unicode is a pragmatic, not a perfect, standard and there are historical mistakes. But for better or for worse they exist and will probably stay.
Technically, they're not even UTF-16 problems; they're extended UCS-2 problems (aka UTF-16-treated-as-UCS-2-with-surrogate-pairs). Logically, a UTF-16 interface wouldn't expose code units first and foremost.
There's a fantastic, in-depth article on Unicode and NSString (the default string class in Objective-C) that was published a couple days ago, which covered a lot of the same material but from an Objective-C standpoint instead.
About Unicode in JS and other languages, it is still worth reading "Unicode Support Shootout: The Good, the Bad, the Mostly Ugly" by Tom Christiansen [1].
One other thing to watch out for: if you're using the sort of regexes the author suggests, be VERY careful about any minification / uglification steps. I recently had to chase down an issue where uglify was replacing Unicode escapes with literal characters, causing strange "Invalid regular expression: Range out of order in character class" errors on load.
[1] http://www.unicode.org/reports/tr29/#Grapheme_Cluster_Bounda...