>character-equivalent for purposes of word length

Serious question: why would you want to do this?

I know it's fashionable to limit usernames to X characters... but why? The main reason I've seen has been to limit the rendered length so there are some mostly-reliable UI patterns that don't need to worry about overflows or multiple lines. At least until someone names themselves:

WWWWWWWWWWWWWWWWWWWW

Which is 20 characters, no spaces, and will break loads of things.

(I'm intentionally ignoring "db column size" because that depends on your encoding, so it's unrelated to graphemes)




>Serious question: why would you want to do this?

Have you never, in your entire life, encountered a string data type with a length rule? All sorts of ID values (to take an obvious example) either have fixed length, or a set of fixed lengths such that every valid value is one of those lengths, and many are alphanumeric, meaning you cannot get round length checks by trying to treat them as integers. Validating/understanding these values also often requires identifying what code point, not what grapheme, is at a specific index.
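To make that concrete (ISBN-10 is my example, not one from the parent): validating one requires exactly ten code points, indexed positionally, with "X" legal only as the final check digit. A minimal sketch in Python, whose strings are natively sequences of code points:

    def is_valid_isbn10(s: str) -> bool:
        # ISBN-10: exactly ten code points; nine ASCII digits plus a
        # check digit that may be "X" (worth 10). Valid when the
        # weighted sum is divisible by 11.
        if len(s) != 10:                      # length in code points
            return False
        total = 0
        for i, ch in enumerate(s):
            if "0" <= ch <= "9":
                value = int(ch)
            elif ch == "X" and i == 9:        # "X" only legal as the check digit
                value = 10
            else:
                return False
            total += (10 - i) * value
        return total % 11 == 0

    print(is_valid_isbn10("0306406152"))   # True
    print(is_valid_isbn10("03064O6152"))   # False: letter O, not a digit

Note the deliberate "0" <= ch <= "9" comparison rather than str.isdigit(), which would also accept non-ASCII digit code points.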

Plus there are things like parsing algorithms for standard formats. To take another example: you know how people sometimes repost the Stack Overflow question asking why "chucknorris" turns into a reddish color when used as a legacy HTML color attribute value (e.g. bgcolor)? HTML5 provides an algorithm for parsing a (string) color declaration and turning it into a 24-bit RGB color value. That algorithm requires, at times, checking the length in code points of the string, and identifying the values of code points at specific indices. A language which forbids those operations cannot implement the HTML5 color parsing algorithm through string handling; you'd instead have to do something like turn the string into a sequence of ints corresponding to the code points, and then manually manage everything, and why do that to yourself?
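A condensed Python sketch of that algorithm (simplified: the spec's named-keyword lookup and "transparent" error case are omitted). Note how every step measures or indexes the string in code points:

    import re

    def parse_legacy_color(raw: str) -> tuple[int, int, int]:
        # Condensed sketch of HTML5's "rules for parsing a legacy
        # colour value"; every step counts or indexes code points.
        s = raw.strip()
        # "#abc" shorthand: each hex digit doubles into "#aabbcc"
        if len(s) == 4 and s[0] == "#" and all(c in "0123456789abcdefABCDEF" for c in s[1:]):
            s = "#" + "".join(c * 2 for c in s[1:])
        # code points above U+FFFF become "00"; then cap at 128 code points
        s = "".join("00" if ord(c) > 0xFFFF else c for c in s)[:128]
        if s.startswith("#"):
            s = s[1:]
        # every non-hex code point becomes "0"
        s = re.sub(r"[^0-9a-fA-F]", "0", s)
        # pad until the length is a positive multiple of 3
        while len(s) == 0 or len(s) % 3 != 0:
            s += "0"
        third = len(s) // 3
        parts = [s[:third], s[third:2 * third], s[2 * third:]]
        # keep at most the last 8 code points of each component
        parts = [p[-8:] for p in parts]
        # strip shared leading zeros down to 2 code points
        while len(parts[0]) > 2 and all(p[0] == "0" for p in parts):
            parts = [p[1:] for p in parts]
        # keep the first 2 code points of each component
        parts = [p[:2] for p in parts]
        r, g, b = (int(p, 16) for p in parts)
        return (r, g, b)

    print(parse_legacy_color("chucknorris"))  # (192, 0, 0) -- the famous reddish

Walking through "chucknorris": the non-hex characters become zeros ("c00c0000000"), a zero is appended to reach a multiple of 3, and the first two code points of each third give #c00000.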


Yes. All the instances I've seen have been due to byte-size restrictions (so they depend on encoding) or to visual reasons (based on fundamentally flawed assumptions). The exceptions are dubious science around word lengths across languages, e.g. as a difficulty/intelligence proxy, or just having fun identifying patterns (interesting, absolutely, but of questionable utility and prone to bias).

But every example you've given has been about visuals, byte sizes, or code points (which are unambiguously useful, yes). Nothing about graphemes.
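And the three counts really do disagree. A quick Python illustration; the grapheme count uses the third-party regex package, since the stdlib re module has no \X grapheme matcher:

    # pip install regex  (third-party; stdlib "re" lacks \X)
    import regex

    s = "nai\u0308ve"                        # "naïve" with a combining diaeresis
    print(len(s.encode("utf-8")))            # 7 -- bytes in UTF-8
    print(len(s))                            # 6 -- code points
    print(len(regex.findall(r"\X", s)))      # 5 -- grapheme clusters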



