Google Wants To Operate .Search As A “Dotless” Domain (techcrunch.com)
48 points by seminatore on April 11, 2013 | 43 comments



A .cloud TLD for "projects hosted in cloud platforms" sounds like an incredibly pointless and stupid idea. We never had .nix, .win32, .php, .asp, .intel or .amd TLDs to signify server properties no user should have to know about in the olden days of not worshipping the cloud. Why do we need that now?


On the other hand, we did have .us, .ca, .uk, .nz...

It's always felt to me like we need a public-registration equivalent to ".int" -- a TLD meaning "this domain represents an internationally-distributed project, not associated with any particular company or organization, nor local to any particular country." (The Webkit project is a good example. Right now it's a .org, but there is no "Webkit Organization." Most .io domains are also really substitutes for a good TLD for organization/country-neutral OSS projects.)

.cloud sort of seems like a good candidate for that. There might be a better one, though.


We still do. The US may neglect its TLD, but ccTLDs are heavily used elsewhere in the world.


.com has basically become a catch-all which represents that.


.com was the catch-all, but I imagine the whole point of opening up TLD-space is to undo that.


So why can't it continue to be the catch-all, when anything else doesn't work?

Opening up the TLD-space and .com being the catch-all aren't mutually exclusive, after all.


Because the US government claims jurisdiction over ".com".


... So? This still doesn't explain why it can't be a catch-all. Just because it's managed by Verisign and falls under US jurisdiction doesn't mean it can't be a catch-all. Someone is going to have to manage a "domain [representing] an internationally-distributed project, not associated with any particular company or organization, nor local to any particular country," and it more than likely will be a US company.

Just because the US ultimately has jurisdiction over .com doesn't mean that all .com domains are associated with a US company. That's what .us is technically for. That's why there's a ccTLD for the US. .com is meant to be a catch-all, and there's absolutely no reason it can't do that. It's doing its job quite well, and has been for years.

The request for .int is someone being pedantic. Even if it's granted, it probably won't catch on and be anywhere near the popularity of .com anyway.


.int is already in existence: http://www.un.int/. US jurisdiction over .com is problematic because the US government routinely seizes .com domain names of websites that are legal in other countries. This doesn't affect me much as a law-abiding US citizen, but it's a huge problem for 100% locally legal sites outside the US that want to have an international name.


I meant his request specifically. To make it public, or have a public alternative.

---

So? All governments routinely seize domain names under their respective ccTLDs. The Pirate Bay just had its .gl domain seized, and I believe its .se will be seized soon too.

In any case, there will be someone having jurisdiction over the domain name. The TLD will fall under, most likely, the US' jurisdiction, so they'd still have seizing power. If it doesn't, then whatever organization, and, through that, country it's based in, will have their own laws, and their own seizure policies. And the laws will most likely conflict with some other country's laws.

This is completely beside the original point that was made, though. Nowhere in the original comment I replied to did the author mention jurisdiction or seizures.


I assumed that the original request for an international TLD was implicitly referencing the seizures and other downsides to country-specific TLDs, since they are frequently discussed on HN. Perhaps I misinterpreted the intention of the request.

Also, I don't think it's necessarily true that one country will always have jurisdiction over specific Internet names. The Internet isn't "done"; both the net and the concept of "jurisdiction" can change over time.


Marketing! "It's not really the cloud unless it's dot-cloud." And some people would believe it.


So, Google operates the search TLD as a redirect to the search engine of choice, but then they get all the info on what the search was for as they redirect. That's a huge advantage for Google.


All major search engines support https, which will become the norm eventually and not empower Google (or another redirector) to sniff packets.


HTTPS won't prevent the whole redirect request from being logged and stored in decrypted form. Google will take the complete search string (search/#search=things+I+want+to+buy) and redirect it; the search parameters are perfectly visible to the redirect server.


This defies my understanding of HTTPS. I thought that in an https request, an encrypted connection would be made to the host first and the request itself (including the query string) would be transmitted as an encrypted stream. Could someone please enlighten me?


In this case, Google would operate the endpoint at http://search. They may redirect or proxy to your registered preference for a search engine, but they have still answered the original request. So if you used an automatic search tool (like the search box built into your browser) that used that address, Google would see http://search?q=question as the request and THEN have to decide what action to take (redirect, proxy, etc.). Users who just went to http://search and THEN entered their question would not show Google their queries if redirected to their engine of choice, but if Google just proxied their search engine of choice, Google would still see everything.
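
A minimal sketch of that distinction in Python (a hypothetical handler; nobody knows how Google would actually implement the endpoint, and duckduckgo.com below is just a stand-in for "the user's registered engine"):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlsplit, parse_qs, quote

    # Hypothetical "dotless search" endpoint. Whether the operator
    # redirects or proxies, it has already seen (and can log) the
    # full query string by the time it decides what to do.
    class SearchRedirect(BaseHTTPRequestHandler):
        def do_GET(self):
            params = parse_qs(urlsplit(self.path).query)
            q = params.get("q", [""])[0]   # the user's search terms
            self.log_message("saw query: %r", q)
            # 302 to the user's registered search engine of choice.
            self.send_response(302)
            self.send_header("Location",
                             "https://duckduckgo.com/?q=" + quote(q))
            self.end_headers()

    HTTPServer(("", 8080), SearchRedirect).serve_forever()

The query lands in the operator's logs at the log_message line, before the redirect is even sent.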


This seems like it is going to cause issues with corporate networks. We use http://search for our internal search portal. My guess is many other companies do the same. Obviously, the local URLs will resolve first, but what happens when browsers and other software expect http://search to conform to a particular API/URL pattern?


Not only that, but how does ICANN expect to get around DNS search domains that aren't there by choice, but by default?

Almost every ISP I've used has provided a DNS search domain as part of their DHCP info, and if the apexes of TLDs start having anything more than NS records, it seems like a lot of things could break until browsers/netadmins/etc. can come up with a fix. Based on how the IPv6 rollout is going, that doesn't inspire confidence... To illustrate, see the sketch below.
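
A rough sketch of why the search domain matters (real stub resolvers read suffixes from /etc/resolv.conf or the platform equivalent and honor options like ndots, and corp.example.com is a made-up suffix):

    import socket

    # Assumed DHCP-supplied search suffix (hypothetical).
    SEARCH_SUFFIXES = ["corp.example.com"]

    def resolve(name):
        # A trailing dot marks the name fully qualified:
        # search suffixes are never applied.
        if name.endswith("."):
            return socket.gethostbyname(name)
        # Otherwise suffixes are tried first, so a dotless
        # "search" hits the intranet host before any new TLD.
        for suffix in SEARCH_SUFFIXES:
            try:
                return socket.gethostbyname(name + "." + suffix)
            except socket.gaierror:
                pass
        return socket.gethostbyname(name)

Which answer a program gets for plain "search" then depends entirely on what suffixes the local network happened to push.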


Are you speaking about an API or a URL pattern after the response has been received from a uri:search? I believe such information would be set by the responding search server.

Also, if this thing kicks off in a good way, there is no reason why internal corporate networks shouldn't start supporting the same API/URL as the internet standard.


Current browsers search whenever you type anything into the address bar. I don't see any reason to change that and add a cumbersome 'http://search'. The http:// should indicate you are not using the browser's search shortcut.


the comment you're replying to is talking about DNS, not a browser keyword.


I'm just waiting for an application for portal. or intranet.


Google apparently wants to destroy the current, very widely used concept of local domain names, which means they either need to change how computers resolve names or introduce inconsistency and large delays in what developers get when their programs resolve names. Beyond that, inconsistency in the user experience means everything from confusion to security issues.

Maybe they should first try this with their own browser? Make Chrome eat up local domain names and see how well that goes. At worst, they just send users and business users to another browser such as Firefox, and if it's such a good idea, they can show graphs of the people who flocked to Chrome because of it.


This is exactly one of the concerns I mentioned: https://news.ycombinator.com/item?id=5353171. It is already exhibited by other controllers of TLDs: http://ydal.de/a-records-on-top-level-domains/

I haven't received a reply yet to my request to reopen the public discussion on the new gTLDs: https://news.ycombinator.com/item?id=5351335. I'll be chasing up on it today - I would appeal to the rest of you to do the same and help stop these gTLDs from ever seeing the light of day.


http://AI/ has address 209.59.119.34

http://BO/ has address 166.114.1.28

http://CM/ has address 195.24.205.60

http://DK/ has address 193.163.102.24

http://GG/ has address 87.117.196.80

http://JE/ has address 87.117.196.80

http://KH/ has address 203.223.32.21

http://PN/ has address 80.68.93.100

http://TK/ has address 217.119.57.22

http://TO/ has address 216.74.32.107

http://UZ/ has address 91.212.89.8

http://VI/ has address 193.0.0.198

http://WS/ has address 64.70.19.33
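
Anyone can reproduce these lookups; a quick sketch (results vary by resolver, since some resolvers filter out apex A records):

    import socket

    # Check whether a ccTLD apex resolves directly to an address.
    # The trailing dot makes each name fully qualified, so local
    # DNS search suffixes can't interfere with the lookup.
    for tld in ["ai", "dk", "tk", "to", "uz", "ws"]:
        try:
            print(tld + ". has address " + socket.gethostbyname(tld + "."))
        except socket.gaierror:
            print(tld + ". did not resolve here")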


This seems like 'innovation' for the sake of 'innovation'.

What's wrong with good old .com? It's not like anybody really cares about the other gTLDs (with few exceptions).


Because all the good domains are running out.


Would it be crazy to scrap the TLD system altogether and build something better in its place? I'd love to allow for wildcard TLDs, or even remove the dot requirement altogether. Built in unicode support, but with a layer of security to prevent against similar-looking character abuse.

I know that's very idealistic and probably naive, but the current system just feels very archaic to me. Can't we do better?
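
For the similar-looking-character layer, one naive approach (a rough sketch only; real confusable detection, e.g. Unicode TR39, is far more involved) is rejecting names that mix scripts:

    import unicodedata

    def mixed_script(name):
        # unicodedata.name() returns e.g. "LATIN SMALL LETTER A" or
        # "CYRILLIC SMALL LETTER A"; the first word names the script.
        # Digits, hyphens, etc. would need special-casing in practice.
        scripts = {unicodedata.name(ch).split()[0] for ch in name}
        return len(scripts) > 1

    print(mixed_script("paypal"))        # False: pure Latin
    print(mixed_script("p\u0430ypal"))   # True: U+0430 is Cyrillic 'a'

That catches the classic Latin/Cyrillic lookalike trick, but it's nowhere near a complete defense.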


301 days ago when the bidding started I posted this:

https://news.ycombinator.com/item?id=4109767

I guess I told you so?


This is no different than if you made google.com.mytld.com, is it? You can, of course, still bypass default suffixes by using a trailing dot.

That is to say, http://search./ or http://apple./ in your example, or http://google.com./ in mine.


It's obvious that your Google example is bad, as Google redirects you to the host without the dot as soon as it can. So if you really were running a local .com domain named google.com, it wouldn't exactly be easy to get to the real Google.


Can't say we didn't see that one coming. I've always wondered why people only talked about subdomains for bought TLDs.


I assumed this was the idea. Why wouldn't Google want http://google./ to work?


.search should be run independently of any current search engine operator, including google.


So why has this not been banned already, with the ban contractually enforced by ICANN?

The potential for both technical and social confusion here is enormous, and without a standard, the browser wars and other totally random momentum on the issue will just increase!


All domains have to have a dot, no? As in, they have to end in a dot, like http://www.google.com./ or http://search./


Yes, and no.

Technically, in DNS all domains are stored with a dot at the end, that is correct. Since you don't see this last dot in most places, including in URLs, it is usually ignored.

Example of a dotless domain that is in use today: http://dk/


So Google expects everyone will hit http://search while logged in so they can get the user's preference and redirect them?

I can't open the full letter here, but that's what I got from the article.

Seems silly and naively evil.


I think dotless is a security risk as far as social engineering goes.


Why do you say so? https and certificates will still be around.


Hmm. What about it makes it more of a risk than arbitrary domains?

I'm assuming we're talking about things like:

    http://search/

    http://weather/


Well, http://search/ would be different from http://search.com/ or http://sear.ch/, for example.

I can imagine a lot of scenarios where people could easily be tricked into accessing the wrong URI if these dotless domains become commonplace.

The same applies to companies whose intranets already use some common words as internal services.
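
The three names really are unrelated hosts as far as a URL parser (and therefore a browser) is concerned:

    from urllib.parse import urlsplit

    # Three distinct hosts a user could easily conflate at a glance.
    for url in ["http://search/", "http://search.com/", "http://sear.ch/"]:
        print(urlsplit(url).hostname)
    # -> search
    # -> search.com
    # -> sear.ch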



