A .cloud TLD for "projects hosted in cloud platforms" sounds like an incredibly pointless and stupid idea. We never had .nix, .win32, .php, .asp, .intel or .amd TLDs to signify server properties no user should have to know about in the olden days of not worshipping the cloud. Why do we need that now?
On the other hand, we did have .us, .ca, .uk, .nz...
It's always felt to me like we need a public-registration equivalent to ".int" -- a TLD meaning "this domain represents an internationally-distributed project, not associated with any particular company or organization, nor local to any particular country." (The Webkit project is a good example. Right now it's a .org, but there is no "Webkit Organization." Most .io domains are also really substitutes for a good TLD for organization/country-neutral OSS projects.)
.cloud sort of seems like a good candidate for that. There might be a better one, though.
... So? This still doesn't explain why it can't be a catch all. Just because it's managed by Verisign and falls under US jurisdiction doesn't mean it can't be a catch-all. Someone is going to have to manage a "domain [representing] an internationally-distributed project, not associated with any particular company or organization, nor local to any particular country.", and it more than likely will be a US company.
Just because the US ultimately has jurisdiction over .com doesn't mean that all .com domains are associated with a US company. That's what .us is technically for. That's why there's a ccTLD for the US. .com is meant to be a catch-all, and there's absolutely no reason it can't do that. It's doing its job quite well, and has been for years.
The request for .int is someone being pedantic. Even if it's granted, it probably won't catch on and be anywhere near the popularity of .com anyway.
.int is already in existence: http://www.un.int/. US jurisdiction over .com is problematic because the US government routinely seizes .com domain names of websites that are legal in other countries. This doesn't affect me much as a law-abiding US citizen, but it's a huge problem for 100% locally legal sites outside the US that want to have an international name.
I meant his request specifically: to make it public, or to have a public alternative.
---
So? All governments routinely seize domain names belonging to their respective ccTLDs. ThePirateBay just had its .gl domain seized, and I believe its .se will be seized soon too.
In any case, someone will have jurisdiction over the domain name. The TLD will most likely fall under US jurisdiction, so they'd still have seizing power. If it doesn't, then whatever organization runs it, and through that whatever country it's based in, will have its own laws and its own seizure policies. And those laws will most likely conflict with some other country's laws.
This is completely beside the original point that was made, though. Nowhere in the original comment I replied to did the author mention jurisdiction or seizures.
I assumed that the original request for an international TLD was implicitly referencing the seizures and other downsides to country-specific TLDs, since they are frequently discussed on HN. Perhaps I misinterpreted the intention of the request.
Also, I don't think it's necessarily true that one country will always have jurisdiction over specific Internet names. The Internet isn't "done"; both the net and the concept of "jurisdiction" can change over time.
So, Google operates the search TLD as a redirect to the search engine of choice, but then they get all the info on what the search was for as they redirect. That's a huge advantage for Google.
HTTPS won't prevent the whole redirect request from being logged and stored in decrypted form. Google will take the complete search string (e.g. search/?q=things+I+want+to+buy) and redirect it; the search parameters are perfectly visible to the redirecting server.
This defies my understanding of HTTPS. I thought that in an https request, an encrypted connection would be made to the host first and the request itself (including the query string) would be transmitted as an encrypted stream. Could someone please enlighten me?
In this case, Google would operate the endpoint at http://search. They may redirect or proxy to your registered preference for a search engine, but they have still answered the original request. So if you used an automatic search tool (like the search box built into your browser) that used that address, Google would see http://search?q=question as the request and THEN have to decide what actions to take (redirect, proxy, etc.). Users who just went to http://search and THEN entered their question would not show Google their queries if redirected to their engine of choice, but if Google just proxied their chosen search engine, Google would still see everything.
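For what it's worth, here's a minimal sketch (the engine URL, port, and handler name are all made up) of what that terminating endpoint could do. TLS only protects the request in transit; whoever answers http://search still sees the full query string before deciding whether to redirect.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs, urlencode

    CHOSEN_ENGINE = "https://duckduckgo.com/"  # hypothetical registered preference

    class SearchRedirect(BaseHTTPRequestHandler):
        def do_GET(self):
            # The operator of the endpoint sees the decrypted query right here,
            # regardless of whether the connection used HTTPS.
            q = parse_qs(urlparse(self.path).query).get("q", [""])[0]
            print("logged query:", q)
            # Only after logging does it forward you to your chosen engine.
            self.send_response(302)
            self.send_header("Location", CHOSEN_ENGINE + "?" + urlencode({"q": q}))
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), SearchRedirect).serve_forever()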
This seems like it is going to cause issues with corporate networks. We use http://search for our internal search portal. My guess is many other companies do the same. Obviously, the local URLs will resolve first, but what happens when browsers and other software expect http://search to conform to a particular API/URL pattern?
Not only that, but how does ICANN expect to get around DNS search domains that aren't there by choice, but by default?
Almost every ISP I've used has provided a DNS search domain as part of its DHCP info, and if the apexes of TLDs start having anything more than NS records, it seems like a lot of things could break until browsers/netadmins/etc. can come up with a fix. Based on how the IPv6 rollout is going, that doesn't inspire confidence...
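To make the ambiguity concrete, a rough sketch (the suffix is made up) of the candidate names a typical stub resolver tries for a bare name like "search" when a search list is configured -- which is exactly where a record at a TLD apex starts competing with local hosts:

    import socket

    SEARCH_SUFFIXES = ["corp.example.com"]  # e.g. handed out via DHCP (options 15/119)

    def candidates(name):
        """Names a typical stub resolver tries, in order, for a relative name."""
        if name.endswith("."):
            return [name]  # trailing dot: absolute name, search list skipped
        return [f"{name}.{suffix}" for suffix in SEARCH_SUFFIXES] + [name + "."]

    for fqdn in candidates("search"):
        try:
            addr = socket.getaddrinfo(fqdn, 80)[0][4][0]
            print(f"{fqdn} -> {addr}")
            break
        except socket.gaierror:
            print(f"{fqdn} -> no answer")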
Are you speaking about an API or a URL pattern after the response has been received from a URI such as http://search? I believe such information would be set by the responding search server.
Also, if this thing takes off in a good way, there is no reason why internal corporate networks shouldn't start supporting the same API/URL pattern as the Internet standard.
Current browsers search whenever you type anything into the address bar. I don't see any reason to change that and add a cumbersome "http://search". The http:// should indicate you are not using the browser's search shortcut.
Google apparently wants to destroy the current, very widely used concept of local domain names, which means they either need to change how computers resolve names or introduce inconsistency and large delays in what developers get when their programs do name resolution. Beyond that, inconsistency in the user experience means everything from confusion to security issues.
Maybe they should first try this with their own browser? Make Chrome eat up local domain names and see how well that goes. At worst, they just send users (business users included) to another browser such as Firefox, and if it's such a good idea, they can show graphs of people who flocked to Chrome because of it.
I haven't received a reply yet to my request to reopen the public discussion on the new gTLDs: https://news.ycombinator.com/item?id=5351335. I'll be chasing it up today - I would appeal to the rest of you to do the same and help stop these gTLDs from ever seeing the light of day.
Would it be crazy to scrap the TLD system altogether and build something better in its place? I'd love to allow for wildcard TLDs, or even remove the dot requirement altogether. Built-in Unicode support, but with a layer of security to protect against similar-looking character abuse.
I know that's very idealistic and probably naive, but the current system just feels very archaic to me. Can't we do better?
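On the "similar-looking character" point: the tricky part is that a homoglyph name is a different string entirely, and today that only becomes obvious once it's encoded to punycode. A tiny illustration (the domain is made up):

    # U+0430 is CYRILLIC SMALL LETTER A; it renders almost identically to Latin "a".
    ascii_name = "apple.com"
    lookalike = "\u0430pple.com"

    print(ascii_name == lookalike)    # False: different code points, same appearance
    print(lookalike.encode("idna"))   # the on-the-wire punycode form, e.g. b'xn--...com'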
It's obvious that your Google example is bad, as Google redirects you to the host without the dot as soon as it can. So if you really were running a local .com domain named google.com, it wouldn't exactly be easy to get to the real Google.
So why hasn't this been banned and contractually enforced by ICANN already?
The potential for both technical and social confusion here is enormous, and without a standard, the browser wars and other totally random momentum on the issue will just increase!
Technically, it is correct that in DNS all domains are stored with a dot at the end. Since you don't see this last dot in most places, including in URLs, it is usually ignored.
Example of a dotless domain that is in use today: http://dk/
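If you want to see how that last dot matters in practice, a quick sketch: a trailing dot marks the name as fully qualified, so the resolver skips any local search suffixes, which is what makes something like http://dk/ reachable at all (assuming the TLD actually publishes an address record at its apex and your resolver passes it through).

    import socket

    for name in ("dk.", "dk"):  # absolute vs. relative form of the same name
        try:
            addr = socket.getaddrinfo(name, 80)[0][4][0]
            print(f"{name!r} resolved to {addr}")
        except socket.gaierror as err:
            print(f"{name!r} did not resolve: {err}")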