The IANA TLD list should never be used directly. What you really want instead is the Public Suffix List [1]. It will help you determine the "effective TLD" of domains like amazon.co.uk or sflawlib.ci.sf.ca.us, and gives you more insight into how a technical allocation at the root transforms politically into implementation.
Yes, it’s weird to have a maintainer asking people not to use their project, but the PSL was a very specific (and unfortunate) hack for a very specific (and unfortunate, and browser-created) problem. It is something we live with, not something we like. While the ideal world is “don’t use any list at all, use the protocols as God, the IETF, and IANA intended”, if you are going to use a list, using the IANA list, updated daily, is much better than the PSL.
Do not use the PSL for anything that is not “cookies abusing the Host header”
Are you still adding suffixes to the list? If so, wouldn't refusing to add new suffixes help with the issue? If no new organisation can make use of the PSL to link their subdomains, then they are only left with SOP. Since the list stays as it is now, no existing websites that depend on the list suddenly break.
We are. Deliberate sabotage like that would take quite a while before it was noticed, however, and it wouldn’t magically fix cookies and how people use them.
To the extent it is used by cookies, we still want to maintain a fair and equitable solution. However, we also want to actively discourage any new users or use cases, to the extent possible, while we also try to fix cookies.
Ideas like https://github.com/privacycg/first-party-sets provide a possible model. While FPS doesn’t directly address this, as part of keeping a narrow scope, the approach of explicitly expressing boundaries is the one with the most viable path. However, that’s effectively “Deprecate the Host option for cookies”, so... that’s a big task.
Simply sabotaging the PSL doesn’t force the problem to be solved, so mostly, it’s an education campaign of “We made a mistake; learn from ours, rather than repeating it.”
Interesting. Does this mean that - without your Google hat on at least - you would prefer that references to the PSL were removed from the CA/B BRs as well?
WebAuthn ends up relying on the PSL as well (via a concept of "registrable domains" and WHATWG). Presumably you'd want that to just require Same Origin instead?
And then also, please don't parse the PSL yourself. Use an existing library[1] for that. The formal algorithm for finding the TLD is more complicated than you realise at first glance.
I would recommend libPSL[2]. It was written by one of the maintainers of GNU Wget and is currently used by both GNU Wget and libcurl.
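For anyone curious, here is roughly what using it looks like: a minimal sketch against libpsl's psl_builtin(), psl_is_public_suffix() and psl_registrable_domain() calls, with error handling mostly omitted and a made-up hostname as input:

    /* build with something like: gcc psl_example.c -lpsl */
    #include <stdio.h>
    #include <libpsl.h>

    int main(void)
    {
        /* PSL snapshot compiled into the library; NULL if built without one */
        const psl_ctx_t *psl = psl_builtin();
        if (!psl)
            return 1;

        /* "co.uk" is a public suffix, so the registrable domain of the
           hostname below should come back as "amazon.co.uk" */
        printf("is public suffix: %d\n", psl_is_public_suffix(psl, "co.uk"));
        printf("registrable domain: %s\n",
               psl_registrable_domain(psl, "www.amazon.co.uk"));
        return 0;
    }

In real use you would presumably load a fresh copy of the list (e.g. with psl_load_file()) rather than rely on the snapshot baked into the library, which ages along with it.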
Disclosure: I was one of the early contributors to libpsl and closely involved in its initial formation.
For one, it appears to handle punycode internally.
Punycode is a method for encoding international Unicode names into ASCII, prefixed with xn--. So this will correctly associate cookies for either the Unicode form or its punycode ASCII equivalent.
As examples, Wikipedia indicates the international domains with the most name registrations as being Russia's "рф", Taiwan's "台灣", and China's "中国", which are represented in DNS as "xn--p1ai", "xn--kpry57d", and "xn--fiqs8s", respectively.
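If you ever need to do that conversion yourself (libpsl appears to handle it internally), here's a small sketch assuming libidn2 is available; the рф example above should come back as xn--p1ai:

    /* build with something like: gcc idn_example.c -lidn2
       (source file saved as UTF-8 so the literal below survives) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <idn2.h>

    int main(void)
    {
        char *ascii = NULL;
        if (idn2_to_ascii_8z("рф", &ascii, IDN2_NONTRANSITIONAL) == IDN2_OK) {
            printf("%s\n", ascii);   /* expected: xn--p1ai */
            free(ascii);
        }
        return 0;
    }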
The data appears to indicate hosting sites where users can register their own names against a provider's domain ( username.example.com ), as well as exceptions to this where the host's own site then uses subdomains ( www.example.com, admin.example.com, cdn.example.com ) and the host's cookies should still be used.
It lists specific TLDs, wildcards where appropriate with *, and notes exceptions to wildcarding by prefixing with !.
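For instance, the Cook Islands block (quoted from memory, so treat the exact lines as illustrative) uses both mechanisms: everything directly under ck is a public suffix, except www.ck, which the registry itself uses as an ordinary registrable domain.

    // ck : https://en.wikipedia.org/wiki/.ck
    *.ck
    !www.ck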
Certainly far from something that would be impossible to write your own parser for, but getting everything right on your first go would be harder than one might expect, and getting things wrong here would be likely to leak the user's information between various sites.
It's actually hard to implement correctly and interoperably, even among browsers, and there are sharp edge cases along the way (such as holes within domain trees).
The author of the library referenced at least worked with the PSL maintainers and browsers to make sure they were faithfully and correctly implementing things :)
Here's one for ya. Legacy third/fourth level domains on ".ca". They were provincial/municipal respectively. Although new issuance was discontinued in 2010, they still exist.
For example, posts from ansuz.sooke.bc.ca have been popular on HN. Sooke is a municipality, BC is a province.
I'm glad the PSL exists, but it's a really ugly hack. The public-ness of a domain should be a queryable DNS record. We shouldn't make tons of programs bundle what's essentially an out of band zone file when we could just use DNS as intended.
Yes, I know about the PSL DNS query service. That's only marginally better. The public suffix flag should be a record on the domain itself.
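Something along these lines, presumably; purely hypothetical, since no such record type or convention exists today:

    ; hypothetical "I am a public suffix" marker, published by the zone itself
    co.uk.       86400  IN  TXT  "public-suffix=1"
    github.io.   86400  IN  TXT  "public-suffix=1"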
It concluded because, despite everyone agreeing they wanted a unicorn, they couldn’t agree which breed of unicorn they wanted, and thus were unable to get one.
Which is to say: things went around in circles because different folks had different problems, those different problems had incompatible requirements, and so things spun in circles for a number of years as every idea failed to solve the problem for everyone simultaneously, while narrowly-specified solutions were shot down for not solving enough problems.
Thankfully it exists, and it seems to be about one of the only sources for identifying second- and lower-level "TLDs".
I remember reading how browsers were accepting cookies that'd be sent to many other sites on the same host level, because the browsers had no idea at what level of authority to differentiate.
Yes, apparently "SLD" is quite a common term but it does not encapsulate the likes of 'pvt.k12.ma.us' or 'uk.com' (in the sense that it's not a registry per se for the latter)
So there's an awkward way of defining things in various contexts, like cookie permissions, whether a domain is registerable, whether it is a registry the next level up
uk.com is second level, pvt.k12.ma.us fourth. I think you're really looking for words denoting political separation (instead of technical), but I don't know any good terminology for that. "Public suffix" is probably the best term to differentiate between a "shared" and a "dedicated" namespace.
Technically though, root servers have information only about TLDs. co.uk may be special for some people, but a.root-servers.net. doesn't care; it still sends you to the uk TLD nameservers, like for every other second-level domain.
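You can see that by asking a root server directly; a rough sketch of the referral you get back (NS names and TTLs are illustrative):

    $ dig +norecurse @a.root-servers.net www.amazon.co.uk A
    ;; AUTHORITY SECTION:
    uk.    172800  IN  NS  nsa.nic.uk.
    uk.    172800  IN  NS  dns1.nic.uk.
    ...

The root hands back a referral to the uk servers and nothing more; it has no idea that co.uk is "special".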
That political separation, or what counts as a "registry", is itself very tricky. Are dynamic DNS services registries? How about a hosting (e.g. blog) service provider that gives a subdomain/hostname to every client?
Well, exactly. Canonically they're often referred to as TLDs in the context of "this trusted authority delegates sub-level hosts" but it means something different to many clients.
As the original comment suggested it's a bit of a non sequitur to call anything below the initial host label to be a TLD. I was just suggesting there isn't a generally accepted moniker for what they're called.
The public suffix list makes some kind of definition, mainly for cookie-level permissions.
"... transforms politically in to implementations."
Is this list controlled by Mozilla. Or perhaps some group of browser oranisations/companies.
Personally, not speaking for any other user, I am not really a fan of the browser deciding what is or is not an "acceptable" TLD, because the browser is not the only program I use for generating and sending HTTP. I use a variety of programs. Perhaps it would be different if I had some control over the browser's list; for example, if I, the user, could add or subtract "TLDs".
In the past I have done this by running an edited copy of the root.zone on the local network. I think it is a cleaner, less application-specific, solution than relying on a list compiled by a browser vendor(s).
Browsers can easily override the "IANA TLD list" as well as the DNS I set up on the local network. I am not saying they are doing this through this list, but the capability is there. Browsers like Firefox are certainly not shy about constantly manipulating how domains and urls display in an address bar, Chrome wants to "protect" the user from "evil" pages, etc. It is a slippery slope. I like the idea of overriding the IANA but not the idea of this being outside the control of the user, decided by some browser vendor(s). I do not want/need applications making decisions about what is or is not a TLD, or in this case what is a legitimate subdomain for purposes of cookies. I already do that through control over the zone files I serve and system resolver settings; I control, i.e. filter, cookies through a local proxy.
The root.zone has grown exponentially and is full of cruft now thanks to the "gTLD" scheme, as others have noted. If you really care about this stuff, I don't think you can rely on someone else to address the problem for you. Mozilla or whomever produces the "public suffix list" is no doubt tied to the online ad industry in some way, directly or indirectly.
It's not just cookies, it defines pretty much all aspects of a browser's security between websites. It sets the boundaries for cross-site scripting. It limits the scope of SSL wildcard certificates. And it's used to determine which part of the domain name gets highlighted in the URL bar.
Which kinda sucks, because this is functionality that should be supported without requiring a global list to work. I should be able to set origin policy on any domain I control.
This is a case of the browser vendor trying to solve a problem for the user that the browser vendor itself created. In this case, cookies from Netscape. At some point the risk outweighs the benefit. That's why you have GDPR.
I worked with some slightly crazy businessmen who were tricked by an out-of-work “domain name consultant” into putting in an application for some new gTLDs.
They got one, and cashed out another application to let someone else take it which got them all their application fees back plus a decent chunk of cash.
They were genuinely convinced their terrible new gTLD was going to make them $100 million a year.
My main job was to stop them from blowing what money they had in reserve on insane publicity stunts for long enough that they woke up and realised they had been conned.
Eventually they woke up but had spent something approaching $2m finding that out. I stopped them spending at least as much again.
“Tell me again why you want to hire ten hot air balloons to fly over this stadium...?”
There were only 279 TLDs back then, or 32 TLDs if you excluded all the country code TLDs. Now there are 1508 TLDs, or 1260 excluding country code TLDs.
> There were only 279 TLDs back then, or 32 TLDs if you excluded all the country code TLDs.
Arguably that's about 30 too many and really the root of this whole mess. Imho Postel, for all the good he did, in retrospect mismanaged DNS by not really establishing any structure or policy, in a way that now feels a bit naive/idealistic (and US-centric).
By the time ICANN took over, the dot-com bubble was already knocking at the door and the laissez-faire, anything-goes attitude of DNS was pretty well cemented, so any drastic changes would have been difficult to accomplish.
Essentially by leaving the legacy TLDs completely open and mostly without restrictions or hierarchy/structure their meaning was eroded away, and if the TLDs have no meaning then it's only logical to throw them away.
One of my side projects is https://twitter.com/diffroot a twitter account that publishes changes to the root zone (except for nameserver IP address changes). Alongside that I have a couple of Twitter threads commenting on the changes.
It's also amusing to see if you can spot a .brand TLD being used for real services, where the brand is not an Internet company. The biggest one I know of is SNCF.
Thank you for this service. It is actually the only Twitter account I have push notifications enabled for (though I need to figure out a better solution when there is a lot of churn).
FWIW, I know at least one non-tech company that was already using .brand as an internal TLD and spent the money to avoid a name collision.
If those TLDs were actually useful, then the domain name system would be in much better shape than it actually is.
In reality, you have .gov for government, .com for business, .org for organisations who don’t care how much traffic they get, local country TLDs for if you operate a service for one country only, and tons and tons of garbage.
com is the only TLD that has any value for international commerce. It doesn’t matter that .netflix exists, because it will never be used for anything productive. The problem is that you can only ever count on a person knowing .com and their local TLD. Everything else either won’t register with people as being an actual domain name, or will look like a scam to most people.
The internet is locked into this system, and it’s one that cannot possibly scale. The explosion in new TLDs is an attempt to address that. But we don’t need to worry about how disgusting it is, because it’s an attempt that has failed.
I would suggest two issues that are more concerning: there is nothing that seems to be a realistic alternative or solution, and the way this problem has played out has diminished the actual usefulness of domain names, with that gap being filled by the Google search engine.
Local hostnames have never been supported by the IANA or anyone else as far as I know. Using them is risky because they usually work, but the specs say they shouldn't.
Local domains are easy to implement right by registering any domain name (even a free one) and setting that as your local DNS domain. If you register pantalaimon.gq and set that as your local DNS suffix, any non-FQDN hostname should be resolved to host.pantalaimon.gq. Entering http://netflix/ will resolve to http://netflix.pantalaimon.gq/. Such a system also gives you more control over your local DNS, as you can do more than A and AAAA records now. Only http://netflix./ (with the trailing dot) would actually constitute an FQDN and bypass the local domain.
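On a typical Linux resolver that is just the search directive; a minimal sketch reusing the example domain from above (the nameserver address is a placeholder):

    # /etc/resolv.conf
    nameserver 192.168.1.1
    # appended to unqualified names, so "netflix" is looked up as
    # netflix.pantalaimon.gq, while "netflix." (trailing dot) stays fully qualified
    search pantalaimon.gq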
I used to use internal DNS infrastructure without a domain, until I realised unqualified queries were being sent upstream because of the DNS suffix my ISP appended (something like example.com), resulting in occasional queries for servername.example.com that failed.
If you use domains to refer to localhost, just use .localhost as the TLD: it's been reserved for that exact use.
A/AAAA records at the gTLD level are not allowed by ICANN. ccTLDs may have these records (probably just an oversight from the past) and some do (http://ai. for example).
Maybe not so much an oversight as (as I understood it) mostly due to the fact that most ccTLD registries predate the existence of ICANN. As the ccTLDs as such weren't issued by ICANN, there's no ICANN policy that applies to them for stuff like this.
Countries view ccTLDs as their sovereign property and territory on the Internet, and refuse to be (involuntarily) bound to any rules or requirements as a matter of sovereignty. It’s a huge source of geopolitical conflict within the IANA/ICANN split (and with DNS in general).
A number of countries have _voluntarily_ agreed to follow ICANN’s principles for good management and interoperability [1], but jurisdictions gonna jurisdict I guess.
It’s supposed to be the non-profit TLD. While I’m sure plenty of them need to market themselves, org domains tend to be more focused on providing services to existing users than marketing services to new users. Things like schools, churches, open source projects...
Yep. The custom TLD thing is an abomination. Many of us said at the time that we'd get the proliferation of stupid that you highlight. ICANN said it wouldn't happen, because economics or good faith or something. Who was right?
Marketing people actually know & plan for this. Check out this blog post about "The Lifecycle of Lead Generation Channels" [0].
It basically says that any given channel starts out small and high-quality, so engaging is easy and credible. As it grows, marketers have to be smarter and larger to stand out. Read it as a consumer, and they're basically talking about a scorched-earth loudness war for your attention.
I can agree that .com is cluttered (with non-companies, even), and that a wider and semantic namespace could be a good thing.
But in my example -- I can't imagine a site that belongs under exactly one of these TLDs. So any site that falls under one of them, conflicts mentally with a site of the same name under at least one of the others. So this hasn't expanded the namespace at all.
Instead, it has poisoned the namespace. Did you want joe.photo or joe.photos? I literally just made up that example, and oh look, it's the same site. Of course it is; the moneygrab was successful. Because otherwise, users will not remember which one was right, and the two sites are competing for the same name with the same meaning.
I’d love $myhandle.sucks but alas the domain registrar decided to charge extortion rates in the hopes that large companies register their own domain to prevent hate sites >_>
Incidentally, if the $185,000 you're about to spend on a new gTLD registration is bringing you down, you could use the money to register icann.sucks instead:
$HUGECO can and likely will still buy it as an insurance policy against the costs ever dropping to the point that mortals might afford it.
$ whois google.sucks
...
Registrant Organization: Google LLC
...
Name Server: ns4.googledomains.com
Name Server: ns2.googledomains.com
Name Server: ns3.googledomains.com
Name Server: ns1.googledomains.com
I agree with @jrockway below. There is no point in retaining TLDs the way they are now. The original idea of TLDs was to have separate namespaces.
For example, apple.com is the company Apple, and apple.fruit may be a fruit seller. This never worked though. In the end, the companies ended up having to register under all the TLDs, or someone else would get apple.dong and pretend to be related to Apple. ICANN decided to use the opportunity for a money grab and started releasing new TLDs every now and then.
It makes sense to get rid of the usage of TLDs as they are today. If Apple is <any subdomain>.apple, that's it. People would know that apple.dong is something related to dong. It might sound far-fetched, but it is not. Once people see the flood of TLDs (like Handshake TLDs, which are easily and cheaply available to the general public on Namebase (namebase.io) or Bob Wallet (Bob wallet.io)), and once they realize TLDs are the new equivalent of .coms, they will realize that it's just the TLD that matters for the authority aspect, and the subdomains are more functional within the company (like mail.google and chat.google).
The only people to lose are the scammers and ICANN.
.calvinklein? .bananarepublic? A clear money-grab that has no benefit to users, complicates validation and security for developers, and seals off vast swathes of the namespace for the sole use of corporations.
Some companies (e.g. https://calculator.aws/ ) are using it for shorter URLs, while still being descriptive.
Sure, some are just doing it because they can, and others have no good use case yet. But I fail to see how .bananarepublic being in the hands of one company is a detriment to me... the average internet user.
It's a money grab for ICANN, precisely. Neither users nor developers have any say in this process, and the body that stands to benefit financially from accepting trademarks as TLDs is _not_ going to be acting in the interest of users or developers, are they?
My argument wasn't specifically about .bananarepublic or .calvinklein. It was more that I don't believe trademarks should have been admitted, full stop. There's no way ICANN can make impartial decisions here that benefit the bulk of Internet users.
I reserve judgement on generic TLDs, although I really don't like the implications to user confusion caused by .photo, .photos, .pics and the like.
Has the money from the heist been sustainably invested? I'd love to see it guarding the openness and availability of the internet infrastructure as a whole.
> complicates validation and security for developers
No? If this broke things for you then I've got very bad news about how broken those things already were.
Only textual exact matches of A-labels work for figuring out if names are the same. Guessing that if parts of the name seem kinda similar maybe that's enough just introduces endless weird security bugs.
And the new TLDs didn't change that, example.com isn't example.org and example.photos isn't example.pictures or example.snaps or example.selfie
If you think you need "effective TLDs" then you actually wanted the Public Suffix List, and the reason that's what you wanted begins decades ago in several ccTLDs and isn't the fault of this cash grab at all. Remember to thank Mozilla again while you're there.
It's not technical breakage that's the problem: it's the mental breakage. This proliferation of TLDs confuses users and makes domain names less recognizable, all for no benefit to anyone but ICANN.
I don't understand why we even have TLDs, and don't just register names at the root level. Sure, it's nice to be able to shard data structures among many providers (.com can be different servers/infrastructure/rules than .net) and might have been a technical necessity "back in the day" (though there weren't many shards, so I doubt it), but now it's actively harmful. You found a company called foobarcorp and register foobarcorp.com... and some jerk registers foobarcorp.net, foobarcorp.info, foobarcorp.sucks, etc. Why even allow this? Let there be one and only one foobarcorp.
Yes, I'm bitter that Google gets google. but I'm stuck with jrock.us. Why does it cost millions of dollars to remove one dot from my domain name? There is no technical reason. Maybe it's time to overthrow the default root servers and start our own Internet.
It's not the company-name TLDs that are most of the problem, it's the most generic words that are the problem. Amazon tried for .amazon and that's currently in a big dispute ( https://en.wikipedia.org/wiki/.amazon ). What if they set up a service called River and used river.amazon?
They also bought .bot; Google did .app. Those are where the real problems are, because now "big internet" controls the TLD and can do almost anything they want with it.
On the flip side, lots of people like to use ccTLDs. So if you happen to like your .fm domain (like say di.fm) that's great... except now you are beholden to the Federated States of Micronesia (which most people probably couldn't even point out on a map).
Google can have google for all I care. I am less than happy with them staking their claim on generic terms with the "new" TLD and trying to present document.new redirecting to Google docs as the best thing since sliced bread.
I find it really nice that we have country-based TLDs. If I Google search for places to buy XYZ and I see xyz.ca, I know I'm buying from a Canadian website.
Also, you can have country-specific information, e.g. immigration.co.uk, where it's pretty obvious who it applies to.
You really can't guarantee that if you buy from a .ca website that it will be a Canadian company.
I bought from a .no, and was surprised when I was hit by a large customs fee when I went to pick up the package. The company ended up actually being registered in the Netherlands and failed to disclose that they weren't collecting the duty tax. Unfortunately I wasn't able to get any of the consumer groups in Norway or the Netherlands to take any action over it.
Agreed we should have names at the root level. There's a new DNS protocol that enables this called Handshake. It's an alternative root zone where anyone can register a root TLD through an open Vickrey auction (unlike ICANN, which often works behind closed doors...).
THIS is the way of the future. Hell, it's not even the future anymore. TLDs should have been the very first thing that got decentralized. (ok, SECOND behind money) ICANN is a monopoly whose corruptness should be criminal. Example: they make Google a partner to sell TLDs as a registrar. Therefore, google can give priority and precedence to searches for TLDs on ICANN's registry and not a competing registrar (like Handshake). Before this (Handshake), where could you go to get an alternate competing top level domain? Answer: nowhere. It's a monopoly by definition.
That's great but the vast majority of people can't access your site. Btw if you are recommending your own service it's considered a courtesy to state your connection.
We extracted this project from Domainr (https://domainr.com), using tooling that updates the database each day. It’s formatted as a single text file (zones.txt) and associated metadata in JSON files. We also generate a Go package for our own uses (the tooling is written in Go).
It’s similar to the PSL, but where the PSL has wildcards and inverted matches, ZoneDB explicitly lists each “known” zone, including retired or withdrawn names.
Also .hotel and .hotels. And .photo and .photos (and .photography). Plus .ink and .inc. And many more "confusingly similar" despite ICANN rules that were supposed to prevent that. Money talks.
I agree there's serious potential for misleading customers, but I also see the occasional merit of having both, i.e., if you own Hank's Hotel you'd want the .hotel TLD to correctly identify your business, and likewise helpmefind.hotels makes more sense than helpmefind.hotel. These are my arbitrary examples, and they do not outweigh the potential for fraud from someone registering hanks.hotels maliciously. I think ICANN is a horrible entity and never should have existed.
On a different note, I like how many tlds there are now. .pizza is my personal favorite.
Did you mean: .hoteles and .hotels? (.hotel was not there as of 2020-09-11T07:10Z)
Amazingly enough, they seemingly had enough intelligence to determine that .hotels and .hoteis (Portuguese for ".hotels") are similar enough to warrant exclusivity [1], but not .hotels and .hoteles. At least .hotels is not yet delegated but merely proposed, and has only passed the initial evaluation, though.
Hi all, a question slightly related to this topic...
Is there an easy (and free) way to get hold of all the registered domains under a TLD or ccTLD?
I know that services like [0] exist, but they are paid for and the validity and collection of data is dubious. Why aren't zone files generally and freely available?
Is there a way to download or mirror the DNS data?
I like how there are two top level domains for my city of about a million people: .cologne and .koeln
Is there any other town represented twice? OK, places like Berlin, Hamburg, London or Paris don't have the advantage of different spellings in English and a local language. But there's only .wien, no .vienna. How about .tokyo -- is there a puny-coded Japanese version?
I looked at the list and ZERO grabbed my attention.
Turns out it's a private gTLD for Amazon.
Reading Amazon's application for the ZERO gTLD (linked at https://gtldresult.icann.org/applicationstatus/applicationde... ) makes me angry. It's completely bland. You could use their application to register any string under the sun. It's not clear what benefits it offers for the public. These types of domains should not be allowed.
Nothing, technically speaking. But legally and economically speaking I think it's a bad idea.
I personally think there should only be a very small handful of TLDs: com, edu, org, gov and maybe a few others. Having a limited number of TLDs communicates to the end user what kind of site it is (government, educational, commercial, non-profit, etc.) and reduces your domain footprint online.
When you allow ".sucks" to be a TLD, now you've basically opened up a new market of squatters and blackmailers forcing companies and individuals to buy up every possible potentially damaging TLD of their trademark or brand[0].
If you allow any arbitrary TLD, be prepared to employ a full DNS police force because tons of people acting in bad faith are going to register every possible typo under the sun in order to capitalize on people's mistakes ("apple.con", "apple.cpm", "apple.vom", "f---.apple")
I agree with this only insofar as it protects the user with information on what type of site they are visiting: .org for non-profits or clubs, .net for networks, .com for commerce, national TLDs, and .gov. The arbitrary TLDs are really there to keep certain organizations from owning the internet, because of how name registration works. Humans are corrupt.
.org could still be for non-profits and national TLDs could still be managed by governments. The meanings of .com and .net, btw, are completely irrelevant nowadays.
My idea is not to cancel the meaning of .org, but rather create other possibilities for names.
What's the difference really between a 1000+ TLDs and a 100,000+ TLDs?
I understand your argument, but it seems to me a separate problem independent of what I proposed.
As evidence, the problem you describe already exists with the current 1000+ TLDs. It won't be a new problem arising from my proposal.
Since it's a separate problem, there should be a separate discussion on how to solve it.
- Should we "cancel" TLDs altogether and just allow entities to register arbitrary sentences as names (why not?)?
- Should Internet companies be called apple.com instead of apple? (the problem is less harmful for non-internet business, right?).
- Should we remove only similar-sounding TLDs (com vs cons)?
- Should we tolerate that apple.com belongs to Apple while apple.con belongs to a different legal entity, in the same way that two companies in different countries can have the same name?
I could go on and on with possible solutions, though my point was only to demonstrate it's a separate problem.
I think we should only allow country code domains (including the EU as a semi-country), plus .int for international organizations and .net for things that don't belong to any country (example: IETF).
I am unsure whether 2 or 3 letter country codes are better.
Also, I think that there should be semi-standardized second-level domains. Example (cc means any country code):
- .com.cc (commercial)
- .edu.cc (anything related to education)
- .uni.cc (only universities and higher education)
- .org.cc (non-commercial entities)
- .gov.cc (executive)
- .jus.cc (judiciary)
- .lex.cc (legislative)
- .mil.cc (military)
- .b.cc (for banks and other financial institutions)
- .name.cc (for personal websites)
Presumably they didn't want to spend a tremendous amount of money for no clear purpose?
Most outfits which registered a brand or company name as a TLD are purely throwing away money here, either because they didn't understand what they were doing or out of sheer vanity.
You can maybe make an argument for a handful of very big technology companies that have some sort of plan for what they'll do with a TLD, such as Google, but I don't think Facebook would be on that list.
कॉम
セール
佛山
ಭಾರತ
慈善
集团
在线
한국
ଭାରତ
大众汽车
点看
คอม
ভাৰত
ভারত
八卦
موقع
বাংলা
公益
公司
香格里拉
网站
移动
我爱你
москва
қаз
католик
онлайн
сайт
联通
срб
бг
бел
קום
时尚
微博
淡马锡
ファッション
орг
नेट
ストア
アマゾン
삼성
சிங்கப்பூர்
商标
商店
商城
дети
мкд
ею
ポイント
新闻
家電
كوم
中文网
中信
中国
中國
娱乐
谷歌
భారత్
ලංකා
電訊盈科
购物
クラウド
ભારત
通販
भारतम्
भारत
भारोत
网店
संगठन
餐厅
网络
ком
укр
香港
亚马逊
诺基亚
食品
飞利浦
台湾
台灣
手机
мон
الجزائر
عمان
ارامكو
ایران
العليان
اتصالات
امارات
بازار
موريتانيا
پاکستان
الاردن
بارت
بھارت
المغرب
ابوظبي
البحرين
السعودية
ڀارت
كاثوليك
سودان
همراه
عراق
مليسيا
澳門
닷컴
政府
شبكة
بيتك
عرب
გე
机构
组织机构
健康
ไทย
سورية
招聘
рус
рф
تونس
大拿
ລາວ
みんな
グーグル
ευ
ελ
世界
書籍
ഭാരതം
ਭਾਰਤ
网址
닷넷
コム
天主教
游戏
vermögensberater
vermögensberatung
企业
信息
嘉里大酒店
嘉里
مصر
قطر
广东
இலங்கை
இந்தியா
հայ
新加坡
فلسطين
政务
The end-game is that everyone is going to have their own suffix for their website. And the first part of the hostname will standardize into, I don't know, maybe "com" for the commercial part of your entity, "org" for the more community-oriented part, "net" for projects that have to do with interconnectivity, etc. Maybe even regional ones like "us" and "co.uk".
Maybe even some sort of routing system, just spitballing here.
/my-shop/com.shopify
/elections/gov.whitehouse
Surprised nobody has thought of something like this.
The only problem I see with this system is that the ICANN could get greedy and possibly sell this conventional "com", "net", "org", etc prefix system to the highest bidders and centralize them to just a few suffixes for us to choose between, then we'd be forced to register our websites as prefixes of a small oligarchy that owns the handful of suffixes. :/
I think the end game will be that the domain part of a URL will become optional so that it is valid to enter just a TLD in the browser and the company that owns that TLD can redirect.
Essentially TLDs will become the new domains and only companies will be able to afford to buy one, but they will do it for the prestige (like the equivalent of owning your .com today).
More likely, the whole system will just collapse and so we will use a private organisation who provides a service linking approximate names to websites, something like a telephone directory but it doesn't require you to correctly spell things and get the right prefix. It will occasionally be a problem where you search for "Honest Company" and it takes you to honest.co.fraud, but if it does that too often I guess we will switch to a competitor. I guess the main solution to that problem is to have a list of different possible matches, and require the user to pick the right one.
Whereas I don't disagree here, it does break how DNS is supposed to work.
A client machine can 'belong' to a DNS namespace, e.g. mycorp.com. Once it does, the DNS resolver should append the namespace automatically onto any unqualified DNS query. So looking up 'server01' would automatically try to resolve 'server01.mycorp.com'.
Therefore in a browser, entering 'https://google' in the address bar should result in a DNS resolution attempt of 'google.mycorp.com'.
However, [almost] all browsers now override that, and prepend 'www.' and append '.com' to any unqualified address BEFORE it hits DNS. If you drop the 'https://', the browser assumes it's a search query and just sends it off to the default search engine.
It's another example of browser providers riding roughshod over established standards under the guise of 'ease of use'.
1. https://publicsuffix.org/list/public_suffix_list.dat