What Ruby 1.9 gets absolutely right is that its String implementation is completely encoding agnostic (by which I specifically mean that it doesn't force your data to be encoded in a particular way). There are encodings for which there is no safe UTF-8 round trip: you can convert the data to UTF-8 just fine, but when you convert it back from UTF-8 to that encoding, you won't get the original input back; you'll get a slightly different output.
Rubyists in Japan don't have the luxury of dealing with Unicode all the time; they still get lots of data in ShiftJIS and other encodings. (The same is true of Rubyists elsewhere, but since US-ASCII is a proper subset of UTF-8, most folks don't know the difference; Win1252 is a pain in the ass, though.) If you have to do ANY work with older data formats, you curse languages that force you to use UTF-8 all the time instead of letting you work with the native data.
Most developers don't think about i18n nearly enough in any case; there's a lot more to worry about that simply using Unicode doesn't solve for you. Even the developers of Ruby have to worry about the fact that LATIN SMALL LETTER E WITH ACUTE (U+00E9) is the same as LATIN SMALL LETTER E (U+0065) followed by COMBINING ACUTE ACCENT (U+0301); and that doesn't begin to address the capitalization of 'ß' ('SS', which isn't necessarily reversible), or the fact that in Turkish 'ı' capitalizes to 'I' while 'i' capitalizes to 'İ'. Don't EVEN get me started on number formatting...
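To make the first point concrete, here's what the two spellings of 'é' look like from Ruby; a minimal sketch, assuming a Ruby new enough to ship String#unicode_normalize (2.2+; on 1.9 you'd need a normalization gem):

    composed   = "\u00E9"    # LATIN SMALL LETTER E WITH ACUTE
    decomposed = "e\u0301"   # LATIN SMALL LETTER E followed by COMBINING ACUTE ACCENT

    composed == decomposed                                                  # => false, the byte sequences differ
    composed.unicode_normalize(:nfc) == decomposed.unicode_normalize(:nfc)  # => true, same canonical form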
EDIT: Added the last paragraph.
And, if you've got loads of data in an encoding that doesn't roundtrip, it's hardly an edge case.
Ruby's implementation is supposed to be such that if you want UTF-8 support and know that your (text) inputs and outputs are always going to be UTF-8, you never have to think about it any differently than you did in Ruby 1.8. If it isn't working that way, then I think there's a bug.
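As I understand the intent, the "everything is UTF-8" setup boils down to roughly this (a sketch, not gospel):

    # encoding: utf-8
    Encoding.default_external = Encoding::UTF_8   # what IO is assumed to contain
    Encoding.default_internal = Encoding::UTF_8   # what incoming text gets transcoded to

    s = "héllo"
    s.encoding               # => #<Encoding:UTF-8>
    (s + " wörld").encoding  # => #<Encoding:UTF-8>, no surprises as long as everything really is UTF-8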
Many of the issues I've dealt with when data mining involved mixed encodings within the same document, documents labelled with the wrong encoding in the metadata, and documents with no encoding information. There's only so much you can do as far as sniffing character sets and languages to avoid mojibake and other, more subtle, problems.
For my purposes, converting to Unicode and losing round-tripping is only a minor concern, whereas dealing with non-Unicode encodings is often a source of major problems.
So, personally, having worked both in languages that deal with strings by converting them internally to Unicode, and ones that treat them as encoding-tagged byte streams, I definitely favor the ones that deal with them as Unicode. But, my purposes aren't everyone's, and I'm not convinced there's a paradigm that would suit both usage patterns.
I personally won't upgrade to 1.9 if they don't fix that. Even with simple code snippets, the Ruby 1.9 solution has caused too much pain to even consider it as an eligible option. I'd personally rather switch to Groovy or Python than to Ruby 1.9. The way Ruby 1.9 handles encodings sucks. Period.
BTW the author has written on that subject several times and he knows it quite well.
If you think the way 1.8 handles (or doesn't handle) encoding is just fine, try things you typically do, but with a different language.
ruby -ryaml -e'p YAML.dump("こんにちは！")'
IMO, not having an encoding associated with some text sucks if you're a non-English speaker.
I'm sure people who don't speak English will read that and answer you straight away ;)
The main (runnable) documentation file is here: http://github.com/candlerb/string19/blob/master/string19.rb
Loss of text data is bad.
Mostly, though, it's because some of these characters are overloaded. If you've got a Windows system, go into the DOS window and type "chcp 932" (you may need the Japanese language files installed). When you type '\', you'll get '¥' (making "C:\Program Files\" look like "C:¥Program Files¥").
In the systems where what became CP932 was first used, the backslash wasn't needed for Japanese, so that code point was used to encode the yen symbol. Other systems did use the backslash, so the yen sign was encoded at a different point. When JIS unified the existing Japanese code pages, it couldn't very well go back in time to change all that old data, so it merged the two encodings on many things. So there's only one Unicode codepoint for the yen glyph ¥, but in this one encoding there are two different characters for it.
This is the most blatant example of a problem with Unicode transcoding, but as far as I know, it's not the only one.
See http://email@example.com/msg02337... for what could be done, but probably won't.
Rules for dealing with legacy encodings:
1. They make no sense.
2. If you think they make sense, remember that you weren't there, so refer to rule 1.
There's no need for any data loss to occur - the String class would merely not support converting from non round-trippable encodings.
Yes, they're common enough (especially in Japan) and encodings have to be baked deeply in if you really want to use everything that a Rubyist expects to be able to use.
The point there though is that Japanese DON’T want \ and ¥ to map properly. They want \ and ¥ to be considered the same. So Unicode isn’t losing information, it’s forcing a distinction the Japanese don’t want to be able to make.
Han Unification. Gory details here: http://en.wikipedia.org/wiki/Han_unification
Other than the OMG-2-bytes-per-ASCII-character hysteria, which is irrelevant for these purposes, there wasn't really a backlash; they could just as well have chosen UTF-8.
My understanding was just the opposite: now that strings are associated with encodings, he can no longer assume that a1 + a2 results in a string with the same encoding as a1 and a2, since a1 and a2 can have different encodings.
The reality may be a bit different, but I recall seeing an email message from Matz on ruby-core last year suggesting that it was supposed to be trivially easy to work with a single encoding (specifically mentioning UTF-8, but implying others).
More likely it's political. Matz (the creator of Ruby), and many of its early contributors, are said not to like Unicode.
Matz (and the people he works with who use Ruby to get their jobs done) needs access to data that's Not Unicode. Painfully Not Unicode as in it doesn't necessarily round-trip.
All in all, this is a huge transition which will take a while to propagate through the whole Rails stack.
This is not a trivial screwup! This is the sort of screwup that should make everybody who's using that wretched platform think thrice before continuing to use it.
It's a pretty straightforward tradeoff. Of course people who are not Japanese will naturally be upset to pay a cost in complexity for a feature of benefit primarily to programmers from a single country. Non-Japanese Ruby programmers will just have to decide whether their solidarity with Japanese programmers outweighs their personal and collective inconvenience.
> It's a +String+ for crying out loud! What other
> language requires you to understand this
> level of complexity just to work with strings?!
But the rest of the world works differently.
Data can appear in all kinds of encodings and may need to be produced in yet other encodings. Some of those can be converted into each other; some Japanese encodings (Ruby's creator is Japanese) can't be round-tripped through a Unicode representation, for example.
Also, I often see the misunderstanding that "Unicode" is a string encoding. It's not. UTF-(8|16) is. Or UCS-2 (though that one is basically broken because it can't represent all of Unicode).
Nowadays, as a programming language, you have three options for handling strings:
1) pretend they are bytes.
This is what older languages have done and what Ruby 1.8 does. This of course means that your application has to keep track of encodings: for every string you keep in your application, you also need to keep track of what it is encoded in. When concatenating a string in encoding A to another string you already have in encoding B, you must do the conversion manually.
Additionally, because strings are bytes and the programming language doesn't care about encoding, you basically can't use any of the built-in string handling routines, because they assume each byte represents one character.
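Roughly what that byte-orientation looks like in Ruby 1.8, assuming a UTF-8 literal in the source file (a sketch):

    s = "héllo"   # six UTF-8 bytes as far as 1.8 is concerned
    s.length      # => 6, bytes, not the 5 characters you meant
    s[0]          # => 104, the byte value of 'h', not "h"
    s.reverse     # splits the two bytes of 'é' and hands you mojibake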
Of course, if you are one of these lucky English UTF-8 users, getting data in ASCII and English text in UTF-8, you can easily "switch" your application to UTF-8 while still pretending strings are bytes because, well, they are. For all intents and purposes, your UTF-8 is just ASCII called UTF-8.
This is what the author of the linked post wanted.
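To make the bookkeeping concrete, this is roughly what you end up writing in 1.8 with the stdlib Iconv; the helper names and the encodings involved are made up for illustration:

    require 'iconv'

    sjis_bytes = read_shift_jis_from_somewhere   # hypothetical helper returning Shift_JIS bytes
    utf8_bytes = read_utf8_from_somewhere_else   # hypothetical helper returning UTF-8 bytes

    # The strings themselves don't know their encodings, so you have to,
    # and you must convert explicitly before concatenating:
    combined = Iconv.conv('UTF-8', 'Shift_JIS', sjis_bytes) + utf8_bytes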
2) use an internal unicode representation
This is what Python 3 does and what I feel to be a very elegant solution if it works for you: A String is just a collection of Unicode code points. Strings don't worry about encoding. String operations don't worry about it. Only I/O worries about encoding. So whenever you get data from the outside, you need to know what encoding it is in and then you decode it to convert it to a string. Conversely, whenever you want to actually output one of these strings, you need to know in what encoding you need the data and then encode that sequence of Unicode code points to any of these encodings.
You will never be able to convert a bunch of bytes into a string or vice versa without going through some explicit encoding/decoding.
This of course has some overhead associated with it, as you always have to do the encoding and because operations on that internal collection of unicode code points might be slower than the simple array-of-byte-based approach.
And whenever you receive data in an encoding that cannot be represented with Unicode code points and whenever you need to send out data in that encoding, then, you are screwed.
This is a deficiency in the Unicode standard. Unicode was specifically made so that it could represent every encoding, but it turns out that it can't correctly represent some Japanese encodings.
3) Store an encoding with each string and expose both the string's contents and its encoding
This is what Ruby 1.9 does. It combines methods 1 and 2: it allows you to choose whatever internal encoding you need, it allows you to convert from one encoding to another, and it removes the need to externally keep track of every string's encoding.
You can still use the language's string library functions, because they are aware of the encoding and usually do the right thing (minus, of course, bugs).
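For example (a sketch; the byte values are just an illustrative Shift_JIS sample):

    sjis = "\x82\xB1\x82\xF1".force_encoding('Shift_JIS')   # two Shift_JIS characters
    sjis.encoding          # => #<Encoding:Shift_JIS>
    sjis.valid_encoding?   # => true
    sjis.length            # => 2 characters, not 4 bytes, because the tag is known
    sjis.encode('UTF-8')   # transcoding happens only when you explicitly ask for it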
As this method is independent of the (broken?) Unicode standard, you never get into the situation where just reading data in some encoding makes you unable to write the same data back in that encoding: in this case, you would just create a string tagged with the problematic encoding and do your work on that.
Nothing prevents the author of the linked post from using Ruby 1.9's facilities to do exactly what Python 3 does (again, ignoring the Unicode issue) by internally keeping all strings in, say, UTF-16. You would transcode all incoming and outgoing data to and from that encoding and do all string operations on that application-internal representation.
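A rough sketch of that approach, using UTF-8 rather than UTF-16 as the internal encoding just to keep the example short (the file names are made up):

    Encoding.default_internal = Encoding::UTF_8   # transcode incoming text to UTF-8

    File.open('legacy_input.txt', 'r:Shift_JIS') do |f|
      text = f.read   # read as Shift_JIS, handed back to you as UTF-8
      # ... all string work inside the app happens on UTF-8 ...
      File.open('output.txt', 'wb') { |out| out.write(text.encode('EUC-JP')) }   # explicit transcode on the way out
    end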
A language throwing an exception when you concatenate a Latin-1 string to a UTF-8 string is a good thing! You see: once that concatenation happens by accident, it's really hard to detect and fix.
At least it's fixable, though, because not every Latin-1 string is also valid UTF-8. But if it so happens that you concatenate, say, Latin-1 and Latin-8 by accident, then you are really screwed and there's no way to find out where the Latin-1 ends and the Latin-8 begins.
In today's small world, you want that exception to be thrown.
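What that failure looks like in 1.9, with made-up strings (a sketch):

    # encoding: utf-8
    latin1 = "caf\xE9".force_encoding('ISO-8859-1')   # 'é' as a single Latin-1 byte
    utf8   = "こんにちは"                              # a UTF-8 literal

    latin1 + utf8   # raises Encoding::CompatibilityError instead of silently mixing bytes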
What I find really amazing about this complicated problem of character encoding is the fact that nobody feels it's complicated, because it usually just works, especially method 1 described above, which has been used constantly in years past and is also very convenient to work with.
Also, it still works.
Until your application leaves your country and gets used in countries where people don't speak ASCII (or Latin1). Then all these interesting problems arise.
Until then, you are annoyed by every one of the methods I described except method 1.
Then, you will understand what great service Python 3 has done for you and you'll switch to Python 3 which has very clear rules and seems to work for you.
And then you'll have to deal with the Japanese encoding problem, and you'll have to use binary bytes all over the place and stop using strings altogether, because just reading input data destroys it.
And then you might finally see the light and begin to care for the seemingly complicated method 3.
Sorry for the novel, but character encodings are a pet-peeve of mine.
I mean, Unicode is meant to be an abstract representation of glyphs, separate from any encoding, that works for all of Earth's languages. It's tailor-made to be a programming language's internal representation of a string. This is its raison d'être.
So it seems to me that #2 is definitely The Right Way™ and that if there's some problem with Unicode that has kept Ruby from adopting it, they should have worked on fixing it, rather than breaking Ruby. OK, "break" is probably too strong a word for the state of Ruby 1.9. And in the real world, fixing an international politicized standard like Unicode is probably impossible. So I can see that this pragmatic solution might have been the only one available. But still, it seems wrong to me.
Out of curiosity, what exactly is the deficiency in Unicode that caused Matz to go with option 3? I presume there are epic flamewars all over the internet about this issue, but I just haven't been paying close enough attention.
[EDIT] I think I missed my point slightly. Python 3 doesn't change the encoding of strings, it decodes them to Unicode. You can encode the string back to the original encoding without loss. Clarified example:
>>> s = u'\xa5' # shiftjis decoding of \
>>> print s.encode('shiftjis')
>>> print s.encode('utf-8')
The first suggestion seems like the logical solution to me; however, I don't need to deal with this stuff on a day-to-day basis...
This is why most C++ teams prohibit their members from overloading operators.
Not sure that's a good ratio.
There's a hidden "it turns out that" in this sentence.