I'm extremely in favor of internationalizing the web as much as possible.
But this is a mistake. Just like air traffic control, there needs to be some minimal set of language standards that web administrators share. English has become the de facto standard, and while perhaps not optimal, at least it was working.
Do you know how many international characters look alike but aren't the same character? How difficult is it going to be to debug domain names by sight? You'd be better off going back to hex.
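To make the lookalike problem concrete, here's a small sketch using Python's stdlib IDNA codec. The Cyrillic "а" (U+0430) renders identically to the Latin "a" (U+0061) in most fonts, but they're different code points, so the two domains below are different names (this is the classic homograph-attack setup):

```python
# Latin "a" vs. Cyrillic "а" -- visually identical, technically distinct.
latin = "apple.com"
mixed = "\u0430pple.com"  # first letter is Cyrillic U+0430

print(latin == mixed)        # False: different code points
print(latin.encode("idna"))  # pure ASCII passes through unchanged
print(mixed.encode("idna"))  # Punycode form with the xn-- prefix, visibly different
```

The Punycode forms differ on sight even though the Unicode forms don't, which is why browsers tend to display suspicious mixed-script names in their xn-- form.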
Maybe I'm not up to speed on the tooling, but I think this is going to be a mess in the long run.
For current Latin-alphabet DNS, domains are case-insensitive. Will that still be true? Will Google have to buy gooGle.com separately? How do you do "case-insensitive" Japanese - don't they have three separate writing systems (hiragana, katakana, and kanji) for different situations? How do you distinguish among identical-looking characters? It's technologically simple, but there's more than technology involved.
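For what it's worth, Unicode does define a rough analogue of case-insensitivity: case folding plus compatibility normalization, which is approximately what IDNA's nameprep step does to a label before encoding. A minimal sketch (the `normalize_label` helper is my own illustration, not part of any standard API):

```python
import unicodedata

def normalize_label(label: str) -> str:
    # Roughly what IDNA 2003's nameprep does: case-fold the label,
    # then apply Unicode compatibility normalization (NFKC).
    return unicodedata.normalize("NFKC", label.casefold())

print(normalize_label("GooGle"))  # "google" -- so gooGle.com isn't a separate name
print(normalize_label("Straße"))  # "strasse" -- German sharp s folds to "ss"
print(normalize_label("ｶﾞ"))      # "ガ" -- half-width katakana maps to full-width
```

Note what this does and doesn't solve: it unifies case and width variants, but hiragana vs. katakana vs. kanji spellings of the same word remain distinct labels, and so do cross-script homoglyphs.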
I haven't seen the implementation plan, but it shouldn't require raw Unicode on the wire. Current IDNs (currently available below the ccTLD and gTLD level) are implemented with Punycode, and the rollout was mostly a non-issue.
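Punycode (RFC 3492) is exactly how the DNS itself stays ASCII-only: the Unicode label is encoded into an ASCII-compatible form with an "xn--" prefix, and resolvers never see anything else. Python's stdlib exposes this directly:

```python
# Round-trip a Unicode label through its ASCII-compatible (Punycode) form.
label = "münchen"
ace = label.encode("idna")    # ASCII-compatible encoding: b"xn--mnchen-3ya"
back = ace.decode("idna")     # decodes back to the original Unicode label

print(ace)
print(back == label)          # True
```

So "internationalized DNS" is really a presentation-layer change in clients; the protocol keeps speaking ASCII underneath.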
We've already got fraud working just fine with ASCII--non-Latin character sets don't really change that. For the Latin-keyboard crowd, imagine having to type an umlaut every time you wanted to go to Google.