
I think a more accurate characterization is that neither code points nor grapheme clusters are usually what you want, but when you're naively processing text it's usually better to go with grapheme clusters so you don't mess up _as_ badly :)

There are definitely some operations that make sense on code points: but if you go through your list, (1), (2), and (4) are things you'll rarely implement yourself (you just need a library), and (3) is ... kinda rare? The most common valid use case for dealing with code points is parsing, where the grammar is defined in ASCII or in terms of code points (which is relatively common).
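
A minimal sketch of that parsing case in Rust: when the grammar itself is ASCII (here, ';' and '=' delimiters in a made-up format), a plain code point operation is exactly right, even if the payload between delimiters is non-ASCII:

    fn main() {
        // The grammar is ASCII (';' and '='); the payload needn't be.
        let input = "key=值;colour=vert";
        for pair in input.split(';') {
            if let Some((k, v)) = pair.split_once('=') {
                println!("{} -> {}", k, v);
            }
        }
    }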

Treating strings as either code points or graphemes basically enshrines the assumption that segmentation operations make sense on strings at all -- they only do in specific contexts.

Most string operations you can think of come from incorrect assumptions about text. Like you said, the answer to most questions of the form "how do I X a string" is "wrong question" (reversing a string is my favorite example of this).
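
A minimal sketch of the reversal problem in Rust (assuming the unicode-segmentation crate): reversing code points detaches combining marks, and even reversing grapheme clusters is only "less wrong", since reversal is rarely a meaningful operation on text in the first place:

    use unicode_segmentation::UnicodeSegmentation;

    fn main() {
        let s = "cafe\u{301}"; // "café" written as e + combining acute accent
        // Reversing code points strands the accent at the front,
        // attached to nothing:
        let by_chars: String = s.chars().rev().collect();
        // Reversing grapheme clusters at least keeps "e\u{301}" together:
        let by_graphemes: String = s.graphemes(true).rev().collect();
        println!("{:?} vs {:?}", by_chars, by_graphemes);
    }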

The only string operation that universally makes sense is concatenation (when dealing with "valid" strings, i.e. strings that actually make sense and don't do silly things like starting with a stray modifier character). Replacement makes some sense, but you have to define "replacement" better based on your context. Taking substrings makes sense, but typically only if you already have some metric of validity for the substring -- either the substring was an ingredient of a prior concatenation, or you have a defined text format like HTML that lets you parse out substrings. (This is why I actually kinda agree with Rust's decision to use bytes rather than code points for indexing strings -- if you're doing it right, you should have obtained the offset from an operation on the string anyway, so it doesn't matter how you index it, so pick the fast one.)
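
A minimal sketch of that point in Rust: byte offsets come out of string operations (find returns one), so indexing by bytes is both fast and safe, while an offset you invent yourself is the dangerous kind regardless of indexing scheme:

    fn main() {
        let s = "naïve: héllo";
        if let Some(i) = s.find(':') {         // i is a byte offset
            let (head, tail) = s.split_at(i);  // fine: i is on a char boundary
            println!("{} / {}", head, tail);
        }
        // By contrast, slicing at an arbitrary byte index like &s[..3]
        // panics here, because byte 3 lands inside the two-byte 'ï'.
    }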

Most string operations go downhill from here: there's usually a right thing to do for each operation, but it's highly context-dependent.

Even hashing and equality are context-dependent: sometimes comparing bytes is enough, but other times you want to normalize to NFC or something, and it gets messy quickly :)
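
A minimal sketch of the equality problem in Rust (assuming the unicode-normalization crate): the same rendered "é" can be one code point or two, so byte equality and NFC equality disagree:

    use unicode_normalization::UnicodeNormalization;

    fn main() {
        let precomposed = "caf\u{e9}";   // é as a single code point, U+00E9
        let decomposed  = "cafe\u{301}"; // e followed by U+0301 combining acute
        assert_ne!(precomposed, decomposed); // byte comparison: different
        let a: String = precomposed.nfc().collect();
        let b: String = decomposed.nfc().collect();
        assert_eq!(a, b);                    // after NFC: equal
    }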

In the midst of all this, grapheme clusters + NFC (what Swift does) are abstractions that let you naively deal with strings and mess up less. Your algorithm will still be wrong, but its incorrectness will cause fewer problems.
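
For a flavour of why that default helps, a small Rust sketch (again assuming the unicode-segmentation crate): counting by grapheme cluster is much closer to what a user would call "characters" than counting code points is:

    use unicode_segmentation::UnicodeSegmentation;

    fn main() {
        let flag = "🇫🇷"; // two regional-indicator code points, one visible glyph
        println!("{}", flag.chars().count());         // 2 code points
        println!("{}", flag.graphemes(true).count()); // 1 grapheme cluster
    }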

But yeah, you're absolutely right that the cases where grapheme clusters are the correct tool to reach for are pretty niche. I'd just like to add that they're often the less blatantly incorrect tool to reach for :)

> (I'm not sure where "cut the string down to 5 characters because we're out of display room" falls in this list. I suspect the actual answer is "wrong question, think about the problem differently").

This is true, and not thinking about the problem differently is what caused the iOS Arabic text crash last year.

For many if not most scripts, fewer code points is not a guarantee of shorter rendered width -- you can even get this in Latin if you have a font with some wild kerning -- it's just that this is much easier to trigger in Arabic, since some letters have tiny medial forms but big final forms.
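
If you do have to cut anyway, truncating at a grapheme boundary is the "mess up less" option: it won't corrupt the text, though it still says nothing about rendered width -- only the shaper/layout engine can answer "does this fit". A sketch in Rust (assuming the unicode-segmentation crate):

    use unicode_segmentation::UnicodeSegmentation;

    // Keep at most `max` grapheme clusters; never cuts a cluster in half.
    fn truncate_graphemes(s: &str, max: usize) -> &str {
        match s.grapheme_indices(true).nth(max) {
            Some((byte_idx, _)) => &s[..byte_idx],
            None => s, // already max graphemes or fewer
        }
    }

    fn main() {
        println!("{}", truncate_graphemes("héllo wörld", 5)); // "héllo"
    }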




There's a very sound argument to be made for the opposite conclusion: that if we care about a problem, we should make it necessary to solve it correctly or else have stuff very obviously break -- not have broken systems that seem like they kinda work until they're used in anger.

Outside of MySQL (which unaccountably invented a weird MySQL-only character encoding that only covered the BMP, named it "utf8", and then silently truncated any actual UTF-8 strings you shoved into it, because YOLO MySQL), UTF-8 implementations tended to handle the other planes much better than UTF-16 implementations, many of which were in practice UCS-2 plus some thin excuses. Why? Because if you didn't handle multiple code units in UTF-8, nothing worked -- you couldn't even write some English words like café properly. For years, pretending your UCS-2 code was UTF-16 would only be noticed by people using obscure writing systems, or by academics.
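
You can see the asymmetry directly in how many code units each character needs (Rust sketch; str::encode_utf16 and str::len count UTF-16 units and UTF-8 bytes respectively):

    fn main() {
        for s in ["é", "漢", "😀"] {
            println!("{}: {} UTF-16 unit(s), {} UTF-8 byte(s)",
                     s,
                     s.encode_utf16().count(), // only "😀" needs a surrogate pair
                     s.len());                 // even "é" is already multi-byte
        }
    }

In UTF-8 you hit multi-unit sequences the moment you write "é"; in UTF-16 you only hit them outside the BMP, which is why UCS-2 masquerading as UTF-16 could limp along for years.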

I am also reminded of approaches to i18n for software developed and tested mainly by monolingual English speakers. Obviously these users won't know whether a localised variant they're examining is correctly translated, but they can be given a fake "locale" in which translated text is visibly different in some consistent way: e.g. it has been "flipped" upside down by abusing symbols that look kind of like the Latin alphabet turned upside down, or Pig Latin is used ("Openway Ocumentday"). The idea here again is that problems are obvious rather than corner cases: if the translations are broken or missing, it'll say "Open Document" in the test locale, which is "wrong", and you don't need to wait for a specialist German-speaking tester to point that out.
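
A minimal sketch of such a pseudo-locale in Rust (the letter mapping here is made up for illustration, not any particular i18n framework's):

    // Map some ASCII letters to visually distinct lookalikes so that
    // untranslated strings stand out while staying readable.
    fn pseudolocalize(s: &str) -> String {
        s.chars()
            .map(|c| match c {
                'a' => 'α', 'e' => 'ε', 'o' => 'ø', 'u' => 'û',
                'A' => 'Λ', 'E' => 'Ξ', 'O' => 'Ø',
                other => other,
            })
            .collect()
    }

    fn main() {
        // Anything still shown as plain "Open Document" is untranslated.
        println!("{}", pseudolocalize("Open Document")); // Øpεn Døcûmεnt
    }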


> There's a very sound argument to be made for the opposite conclusion: that if we care about a problem, we should make it necessary to solve it correctly or else have stuff very obviously break -- not have broken systems that seem like they kinda work until they're used in anger.

Oh, definitely :)

I'm rationalizing the focus on grapheme clusters; if I had my way, "what is a string" would be a mandatory unit of programming language education, and reasoning about this would be more strongly enforced by programming languages.



