
Show HN: Use any text as a domain name - daenz
http://github.com/amoffat/hash-n-slash
======
TeMPOraL
I might be missing something, but I completely don't get this idea, especially
with the examples provided by the author. I have two major concerns:

> _Bind searches to domain names, eg "food in chicago" =>
> f02970848a63988965aa40cd368ffcf9046209ca.com_

This IMO is bad, and goes in a completely wrong direction. We've invented
search engines so that such phrases are _not_ bound to a particular domain.
Who would handle the "#://food in chicago" domain? Would it be Google? Bing?
Yelp? A local restaurant chain? Or maybe some scammers? And who would maintain
the completely different website "#://food in Chicago", and why does "#://Food
in Chicago" silently install malware on me?

The reason searching for such phrases makes sense, while having them as
domains does not, is that things like "food in chicago" are poorly defined,
fuzzy concepts. It would feel weird to change one letter in a query, or to
replace the word "food" with, e.g., "something to eat", and land on a
completely different website. Moreover, major search engines are more or less
egalitarian wrt. businesses. Yes, there's the whole SEO thing, but you can't
get full control of what food joints are listed near your location just
because you managed to register first. I can (and do) trust listings from
Google; they have both the incentives and the track record of being fair. I
will never trust listings from a random-autogenerated-squat-scam-business-site.

Which brings me to the second point,

> _Good domain names are pretty scarce. It's a source of frustration for
> anyone who has ever tried to buy a domain._

Yes, they are, and the primary source of frustration is that they are mostly
taken by various squatters and other scums of the Internet. What will happen
is that, the moment there's any real possibility such hash-domain scheme is
introduced, all those evil people and companies will take all the domains like
"#://microsoft", "#://android" and "#://insert any popular keyword or phrase
here" in order to sell them back to real businesses for boatloads of money.
And then we'll be back to square one, with maybe a slightly bigger domain space
than we have right now. Bad people win, good people lose, and nothing changes.

So, again, the concepts behind this idea elude me.

~~~
larrys
"and the primary source of frustration is that they are mostly taken by
various squatters and other scums of the Internet"

"Scums of the Internet"? Oh please.

Where are you getting all this from, exactly? You've just decided that, since
you weren't able to buy a domain [1] that you wanted at a price that _you
could afford to pay_, all domains "are mostly taken by various squatters
and other scum of the Internet".

I mean "scum"? How unfair that something that you want isn't available at a
price YOU can pay. And if that price was affordable that someone else wouldn't
have _beaten you to buying it_ (which is already what happened, right?). I
mean it would just be sitting there because nobody else ever thought of using
or buying, say "hackernews.com" until PG decided to start Hacker news.

Or perhaps you think that domains are a "public trust" and that there should
be some official board designated to decide who is worthy of a particular
domain and whether they are "using" that particular domain "the right way".
You know, to make things fair.

Is that it? Further, you are talking about .com because, in general and with
maybe a few exceptions, you can find many names in other TLDs (you just don't
want them) or make a slight modification and have the domain you want in .com.

By the way, are you aware that Google owns duck.com and refuses to sell it [2]
at any price at all? (And has turned down $500,000 for it, iirc.) Of course
they aren't "using it" [3] and only have it because they bought the company
that previously owned it.

At least if it was owned by a person such as you refer to it would be
available for sale at some price.

[1] Or perhaps that wasn't even the case; maybe you have just read about other
people, or know someone who was the "aggrieved" party?

[2] I was involved in trying to do this, and had back-and-forth communication
with high-level people at Google who in the end simply said "sorry, not
interested at any price". And this isn't the first time this has happened with
a company either.

[3] It redirects to google.com as if they need that traffic.

~~~
Houshalter
Squatters are scum. They add nothing of value and make a resource artificially
scarce for their own profit. If domain names had no value and people could
just buy another one, then squatters wouldn't be making money in the first
place.

~~~
lostlogin
I agree with you. However, making a resource artificially scarce to push the
price up is what humans do. From where I am sitting I can see multiple items
where this happens: gold, diamonds (wife's ring), water (in a glass),
electricity (well, a turned-on light), several things with brand names on them
(the copyrighted logo seems to keep the price up by ensuring that only one
supplier exists, keeping supply low). I'd be quite surprised if there was much
in the room whose price wasn't artificially inflated.

~~~
Houshalter
Sort of, but it's not exactly the same thing. Physical items generally take
effort to build or to extract, and sellers generally pay for the land or
materials or whatever. Copyright is meant to artificially inflate the value,
for the benefit of the author who created the work.

If there is a case of someone just inflating the price and adding literally no
value at all then I would say that is just as bad.

------
arn
Reminds me of "RealNames". Dot-com era company. $130 million in funding.

[http://en.wikipedia.org/wiki/RealNames](http://en.wikipedia.org/wiki/RealNames)

 _RealNames was a company founded in 1997 by Keith Teare. Its goal was to
create a multilingual keyword-based naming system for the Internet that would
translate keywords typed into the address bar of Microsoft's Internet
Explorer web browser to Uniform Resource Identifiers, based on the existing
Domain Name System, that would access the page registered by the owner of the
RealNames keyword._

~~~
byproxy
Also, AOL Keywords.

~~~
eli
And new.net, though I think that was basically malware. There were a lot of
"DNS Alternatives" in the early 2000s.

~~~
swatkat
New.net was terrible. It used to inject DLL into IE, hijack Winsock LSP and
whatnot!

~~~
eli
Yup, a bad idea implemented poorly.

------
Patrick_Devine
This is a horrendously horrible idea, for the same reason that unicode domain
names are a bad idea. Domain names are important because they provide a
reasonable amount of trust. If I type [http://apple.com](http://apple.com),
I'm 99.99999% certain that I've connected to Apple's website. This gets nasty
with unicode, because a person can spam your email account and get you to
click on a URL which looks very similar to something like apple.com, but
really points to a malicious site (thank you, cyrillic characters).

Hash based domain names would be even worse. You have no idea what site is
lurking behind some big string of hex digits. You could argue that a person
should just compare the hash to some known set of hashes, but that's a.
cumbersome and b. unrealistic. If it's done by humans, it's error prone (a
malicious site could spoof the first few chars to point to their site), and if
it's done by computers, what's the point? You've now effectively created a
really shitty replacement for DNS.

~~~
mjolk
>This is a horrendously horrible idea, for the same reason that unicode domain
names are a bad idea.

So, you suggest that non-English speakers should just "learn it" to use DNS?

~~~
Patrick_Devine
Until we can figure out something that doesn't suck, yes. Look, I'm
sympathetic to the huge chunk of the world's population having to deal with a
(potentially) unfamiliar character set, but what can you do about it? There
has to be some standard that everyone can agree to which doesn't compromise
the integrity of the net.

~~~
jawns
This might at first sound like a xenophobic, anglocentric position, but it
actually makes quite a bit of sense. There are a number of instances in which
it is advantageous / critical to adopt a lingua franca -- that is, a single
language that everyone in the world agrees to use for a particular purpose. In
the case of domain names, Patrick is right: If we allow characters that appear
to be one thing but are actually another, it opens up a whole bunch of evil
possibilities.

~~~
jessedhillon
Couldn't this also be mitigated by making all domains mandatorily delegated
from a national ccTLD? Eliminate/redirect apple.com to apple.com.us, then make
sure it conforms to the rules for US domains (Latin character set only, etc)
whereas apple.com.ru comes under the rules of the Russian registrar?

~~~
mjolk
I like this solution. However, ICANN already made a profit on .us/.ru domains,
when I really think they should have come as 'free' suffixes to
registrations.

e.g.

I buy 'apple.com', and if I want to leave it at that, fine. However, I
should also have 'apple.com.us' and 'apple.com.ru' so that I can handle these
appropriately. It's not perfect, but it at least gives my users a chance to
say "hey, I probably prefer (english|russian), so please give me that page."

Of course, this is also a bit lazy and somewhat of a non-solution, as this
only addresses the issue for English speakers. A Russian speaker using the .ru
namespace is already willing to "play by the ASCII rules."

People going to 'apple.com' really expect to go to the webpage of the American
electronics company. One could assume this from the TLD. Users sending a
request to 'apple.рф' would be doing something somewhat strange (the user
sends an English base label, followed by a Cyrillic TLD). This isn't that
absurd, though, as English company names become loanwords (at least in Russian
-- see "xerox", or even ask a Russian whether he owns a 'yabloko makkniga pro'
or an 'apple macbook pro', for example). Should the presence of a non-ASCII
TLD trigger a country-specific mode in browsers for the sake of security? How
do we handle loanwords (spoiler to the above: Russians say 'apple' when
referring to the brand, even though it shadows the actual Russian word for
apple) with non-ASCII TLDs?

~~~
nfoz
IMO different language versions of a site should not be based on TLD, but
rather a subdomain: us.apple.com, ru.apple.com

------
colmmacc
This mechanism is halfway to a suggested scheme for domains that are less
vulnerable to single-actor takedowns, posted here on HN a few days ago:
[https://news.ycombinator.com/item?id=6964090](https://news.ycombinator.com/item?id=6964090)

In short: instead of merely mapping to [hash].com, the extension could map to
[hash].com, [hash].se, [hash].ly, [hash].is, [hash].ch and then use a quorum
consensus of whatever answer 3 or more of those names agree on. Effectively,
each TLD registry (and each of your registrars), along with their regulatory
environment, would lose the ability to take down your name without
international agreement.

For certain niches, such a feature might be a good enough value proposition to
ordinary users to convince them to install an extension.

Other observation: base 36 is probably a better encoding for the hash data
than hex. DNS isn't great with lengthy answers and every byte is worth
conserving. But it's cool to see something interesting like this in the form
of a browser extension.
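(A quick sketch of the saving: DNS names are case-insensitive, so base 36 -- digits 0-9 plus a-z, all legal in a DNS label -- is about the densest "safe" alphabet, and a 160-bit SHA-1 fits in at most 31 base-36 digits versus 40 hex digits:)

```python
import hashlib

digest = hashlib.sha1(b"food in chicago").digest()
n = int.from_bytes(digest, "big")

# Encode the 160-bit value in base 36 instead of base 16.
alphabet = "0123456789abcdefghijklmnopqrstuvwxyz"
b36 = ""
while n:
    n, r = divmod(n, 36)
    b36 = alphabet[r] + b36

print(len(digest.hex()), len(b36))  # 40 hex chars vs. at most 31 base-36 chars
```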

~~~
mtrimpe
Or even z-base-32, to help with readability for when you need to copy written
hashes. :)

------
splatzone
This is really cool. This is my favourite kind of idea; one that changes
something fundamental in a succinct way.

This makes me wonder how important domains are at all. My mum never even
thinks about the domains for the websites she visits, she just types in 'ebay'
and Google does everything for her.

The only time I think about URLs (outside of coding) is when I have to share a
link with someone, but I wonder if even that could be replaced with a
sufficiently advanced search engine.

~~~
snowwrestler
Google search indexing might be one downside of this system, as Google tends
to give a lot of weight to domains that match your search term. But in this
case the actual domain is a "nonsense" hash.

Perhaps the browser extension could be set so that whenever a search term is
entered, it submits Google searches for both the raw text and its hash. If
Google has indexed a domain that is a hash, and that exact hash is submitted
as a query, you would get the right result as #1 every time.

~~~
splatzone
According to Moz, it's hardly important at all:
[http://moz.com/search-ranking-factors](http://moz.com/search-ranking-factors)

~~~
snowwrestler
Exact match domain has the highest correlation of any on-page factor in that
table--even higher than keywords in the title or H1.

------
pudquick
I would be in favor of this idea if a little more thought was put into how the
hashing function works.

As it stands, someone typing in:

food in Chicago

will get a different URL than:

food in Chicago

And the same goes for: Chicago food, chicago food, food near Chicago, etc.

Each one, with only a single-character difference (an extra space, different
word order, a capitalization difference, regional spelling like theatre vs.
theater, etc.), will result in a different hash.

You've now made 'humanized URLs' into 'no one will guess your domain'.

It's an interesting approach to avoiding search engines, but it doesn't solve
the problem that search engines do solve: multiple similar but different
entries resulting in the same "appropriate"/top website result.

With this approach, not even face book, Facebook, and facebook would result in
the same .com (and please don't suggest just purchasing a billion domains and
redirecting them all).

~~~
coherentpony
>As it stands, someone typing in:

>food in Chicago

>will get a different URL than:

>food in Chicago

Why?

~~~
vezzy-fnord
IDN homograph attack.

~~~
scintill76
Are you sure? I just pasted both into my UTF-8-encoded Linux terminal running
`od -tx1` and got the same hex octets. (I also tried, in Chrome's JS console:
`"<paste string>".split("").map(function(s) { return s.charCodeAt(0); })`.)
It's quite probable I don't know what I'm doing, so I'd appreciate knowing how
to do it properly.

Also, since these strings would be typed, I'm not sure the homograph attack
applies. Why would someone slip in a Cyrillic letter or something while typing
the URL themselves? If extended to clickable links that displayed the pre-hash
text, I could see the issue, but pudquick specifically said "someone typing
in" the two URLs.

------
jrochkind1
To the extent that TLDs are namespaces of hostnames, it's just adding the
equivalent of another TLD, but implementing it as a weird proprietary
extension on top of .com.

Why?

~~~
MichaelGG
Some people don't remember RealNames and other keyword systems, and thus must
reinvent them to realise it's a bad idea?

------
chavesn
My initial thought: This is terrible, because how would we ever know which
sites we can trust? Something as tiny as an extra space would change the hash
value.

But on second thought, the real problem is that we (the web technology
community) have assumed domain names are even a remotely suitable proxy for
trust. I don't think most common web users actually get this point. That's why
phishing is so easy (except for the part about getting a phishing email past
spam filters).

Do you think most people really know (or notice) the difference between
webaccess.bankofamerica.com and webaccess.bankofamerica.x8.co? I doubt it.

So the real fix for this situation is creating a _true_ trust system that most
actual end users can understand and rely on.

Then, it seems only natural for something like this to be the future. UUIDs
will act as the underlying addressing technology, with _whatever you want_
as your display name.

And as a bonus, it will really cut down on the cybersquatters' profitability.

------
asperous
One downside of this is that you're using a secure one-way hash function.
This means that if for some reason you have only the hashed URL, you can't get
the clean one back.

You could use something like base64 instead... it might work better, but it
would remove the ability to use files as domain names.

~~~
biot
A TXT DNS entry could identify the unhashed text, e.g.:

    @    TXT   "v=sha1reverse; food in chicago"

The browser could look this up, verify it, and display "#://food in chicago"
in the location bar instead of the hashed domain name.
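(A sketch of how a client might verify such a record, assuming the hypothetical `v=sha1reverse; <text>` format above and SHA-1 as the hash -- the important part is that the browser never trusts the TXT text without re-hashing it:)

```python
import hashlib

def verify_txt(domain: str, txt_record: str):
    # Hypothetical check: parse the assumed "v=sha1reverse; <text>"
    # format, re-hash the claimed text, and accept it only if the
    # hash matches the domain's first label.
    prefix = "v=sha1reverse; "
    if not txt_record.startswith(prefix):
        return None
    claimed = txt_record[len(prefix):]
    if hashlib.sha1(claimed.encode()).hexdigest() == domain.split(".")[0]:
        return claimed
    return None

domain = hashlib.sha1(b"food in chicago").hexdigest() + ".com"
print(verify_txt(domain, "v=sha1reverse; food in chicago"))
```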

------
glesica
Fun story: some of the older (hehe, "older" as in older than 30 or 35) people
here might remember that in the 90s there was a startup that did almost
exactly this, but with a browser plugin as I recall (well, not exactly this --
they didn't hash the text, but they took arbitrary plain text and mapped it to
a URL). The idea was to sell short sentences to companies, so "seinfeld TV
show" (or whatever) in the address bar would have gone to the NBC "Seinfeld"
web site, etc. I think the idea was to make "deep" links easier for people to
remember, but I don't remember the details; I just remember that it existed
and I probably read about it in PC Magazine.

------
ffk
This reminds me of how many people don't use domain names at all anymore.
Many people probably type the name of their bank into the search bar and visit
the first result. I've seen people literally pull up Google to search for
Yahoo Mail or eBay.

Domain names are still useful; for a start, they provide some level of
authentication when mixed with cryptography. (If you visit
[https://news.ycombinator.com](https://news.ycombinator.com), you can be
relatively sure there is no MITM, under certain conditions.)

It would be interesting to see how this system can be adapted to work with our
current Internet infrastructure.

~~~
GilbertErik

      --2014-01-01 16:45:59--  https://news.ycombinator.com/
      Resolving news.ycombinator.com (news.ycombinator.com)... 198.41.191.47, 198.41.190.47
      Connecting to news.ycombinator.com (news.ycombinator.com)|198.41.191.47|:443... connected.
      ERROR: The certificate of `news.ycombinator.com' is not trusted.
      ERROR: The certificate of `news.ycombinator.com' hasn't got a known issuer.

------
jyap
This won't work, because to me this is similar in concept to URL shorteners.
The difference lies in the shortening algorithm and the use of expensive
domains.

So for example, with current shorteners you have:

http://shorturl.com/{algorithm for unique URL goes here}

In the above case, using a browser plug-in can also eliminate any server-side
resolution of domains.

With this proof of concept:

http://{algorithm for unique URL goes here}.com

... except the implementation costs $ if it is to be accepted... and to be
accepted it needs to have a benefit that isn't solved by URL shorteners.

------
omegote
Aaand another Chrome extension that bundles jQuery just to do a couple of
element selections that could have been done using querySelectorAll...

~~~
munimkazia
Fortunately, it is open source. Sending a pull request later this evening when
I fix this.

------
cviedmai
In the interview that Eric Schmidt did with Julian Assange a few years ago,
Julian talked about a similar system that I found quite interesting:
[http://wikileaks.org/Transcript-Meeting-Assange-Schmidt#602](http://wikileaks.org/Transcript-Meeting-Assange-Schmidt#602)

------
bbosh
Well, DNS is nothing but a mapping from a domain name to an IP address. All
this is is a mapping from a piece of text to a domain name to an IP address --
or, alternatively, a mapping from a piece of text to an IP address. In other
words, it's nothing but a domain name with different syntax.

------
gojomo
Neat idea! One quibble: since the essence is specifying a domain name in a new
way, it'd be good if the 'escaping' convention didn't clobber/assume a single
protocol/scheme. As it stands, it's not clear how this would be used for other
schemes, even 'https'.

Perhaps allow 'scheme#' (where a naked '#' implies 'http#'), or move the
convention entirely to the domain-name area rather than the scheme area.
('#food in chicago' -> 'http://#food in chicago' ->
'http://f02970848a63988965aa40cd368ffcf9046209ca.com')

------
Rabidgremlin
Nice! Reminds me of the "4 little words" thing I put together a few years ago:
[http://blog.rabidgremlin.com/2010/11/28/4-little-words/](http://blog.rabidgremlin.com/2010/11/28/4-little-words/)

------
Aqueous
I don't really want my mid-level domains to potentially be hashes -- I want
domains at all levels to be intelligible and, optimally, to be named what they
are, i.e. foodinchicago.com or food_in_chicago.com or "food in
chicago.nlp_name".

Perhaps enabling spaces in domain names is possible? Since spaces in filenames
are allowed, I don't see why spaces in domain names shouldn't be allowed. And
then you could have a default root domain for natural-language names -- .nlp,
for instance -- and then just assume that name when someone types in a
natural-language URI with no TLD.

We could use something like NameCoin for this TLD to avoid collisions.

------
jlees
As a tangent, I read this earlier today and then got annoyed at not having a
quick URL shortener on hand (I got spoiled at my last job, where I could use
internal shortcuts for complex URLs -- create a new Google Doc, link to a
specific doc, etc.).

So, decided to write my first Chrome extension, modelled loosely on your
domain name hashing, and here it is:
[https://github.com/jennielees/jump](https://github.com/jennielees/jump)

It's entirely a personal itch-scratcher, but thank you for the inspiration!

------
tshadwell
I think this could be accomplished without creating a new system of domain
names, by using visually similar non-ASCII characters in an IDN[1]:

"food in chicago.com" becomes "food⋅in⋅chicago.com" becomes
"xn--foodinchicago-lj4hc.com"

[1]
[https://en.wikipedia.org/wiki/Internationalized_domain_name](https://en.wikipedia.org/wiki/Internationalized_domain_name)
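(For the curious, Python's raw `punycode` codec shows roughly what that encoding step looks like -- note this skips the IDNA mapping/validation a browser would apply, which may well reject U+22C5 outright:)

```python
# Raw punycode encoding of the substituted label; basic ASCII code
# points are copied first, then a delimiter, then the encoded deltas
# for the non-ASCII characters.
label = "food\u22c5in\u22c5chicago"  # U+22C5 DOT OPERATOR for each space
encoded = "xn--" + label.encode("punycode").decode("ascii")
print(encoded)
```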

~~~
mschuster91
IIRC, these "visually similar" characters cannot be registered anymore, due to
widespread abuse in phishing/spam emails (e.g. päypa1.com).

~~~
nawitus
They can, but there are limitations. For example, ä can only be used in
certain countries (for example, a country where a large number of people speak
a language that uses ä).

------
jacooqs
I like the concept, but it still doesn't solve the problem.

The concept is that if it were accepted by browsers, devs won't have to
struggle with squatters for domain names. Instead, we register the hash of a
word or phrase we want to use, and use that as the domain.

BUT: nothing stops the squatters from doing the same thing under this new
concept. The squatters will just register the hash, leaving you with the same
problem as before.

------
lessnonymous
I don't get the benefit... the only 'pro' I can think of (spaces in domain
names) would be better answered by allowing a space-to-double-hyphen
transcoding, so

hxxp://food in chicago.com/ encodes to hxxp://food--in--chicago.com

Pros:

* Domains can now have spaces

Cons:

* Domains are now case-sensitive
* You visit a domain and it isn't reversible
* The google-juice assigned to domain names is gone
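(A sketch of the proposed transcoding, which also illustrates the "isn't reversible" con: any name that legitimately contains a double hyphen -- and every IDN already begins with the reserved `xn--` prefix -- decodes to the wrong thing:)

```python
def to_domain(name: str) -> str:
    # Space -> double hyphen, per the suggested transcoding.
    return name.replace(" ", "--")

def from_domain(domain: str) -> str:
    # Naive inverse; ambiguous for names that already contain "--".
    return domain.replace("--", " ")

print(to_domain("food in chicago"))      # food--in--chicago
print(from_domain("xn--foodinchicago"))  # "xn foodinchicago" -- oops
```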

------
gfodor
Just think: if you add a salt that you can change to switch "virtual DNS
networks", then _everyone_ can own their own google.com!

------
neil_s
Domain names are already a layer of indirection, a friendly way to specify IP
addresses. This layer would be a friendly way to specify domain names that are
a friendly way to specify IP addresses. And you gain not much else besides the
ability to add spaces.

Still, the implementation approach is interesting.

------
Shank
It would be really cool if a tool like this could be used to resolve Tor
services' names into hidden-service URLs. That would make Tor more useful in
general, if you could #://wikileaks, for instance.

~~~
MichaelGG
Hidden service URLs are intrinsic to the security of connecting to them.
That's why they're complicated. By adding a second resolution step, you
introduce a lot of security issues. How is a user assured that "bla:wikileaks"
is the "real" wikileaks, for whatever value of "real" you choose?

------
desireco42
I love the idea; I've been thinking along similar lines.

I would add less strict spelling, i.e. spelling correction for existing
domains; and for domains that are not recognized, go straight to a search
engine (i.e. Google).

------
MWil
My interest was piqued by document archival/confirmation uses.

------
yoyo1999
There was a Chinese company called 3721. They did a similar thing about 10
years ago. They were acquired by Yahoo. Yahoo 3721 was shut down about 5 years
ago.

------
EvanHahn
The first thing that came to my mind when I saw this: hash collisions. Could
problems arise from two keywords that hit the same domain?

Really cool idea, though.

~~~
primitivesuave
With SHA1 the chances of that happening are so close to zero that you'd have
to create trillions of keywords before you even approach a minuscule chance of
hitting a collision.

One of my favorite analogies for SHA collisions:
[http://stackoverflow.com/a/4014407/690258](http://stackoverflow.com/a/4014407/690258)
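(The standard birthday-bound estimate backs this up: among n uniformly random 160-bit outputs, the probability of any collision is roughly n²/2¹⁶¹, which stays negligible even for absurd numbers of keywords:)

```python
# Birthday bound: probability of any collision among n uniformly
# random 160-bit SHA-1 outputs is approximately n^2 / 2^161.
def collision_probability(n: float) -> float:
    return n * n / 2.0 ** 161

print(collision_probability(1e12))  # ~3.4e-25 even at a trillion keywords
```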

------
tomrod
This is awesome! I wonder if this could be integrated with "I'm Feeling
Lucky"? It would be good for most Google searches IMO.

------
jayzalowitz
759730a97e4373f3a0ee12805db065e3a4a649a5.com is available, someone inform
cutts

------
elwell
this is a bad idea that will never work

/s

~~~
infinitebattery
Why do you think that? Please elaborate! Hacker News isn't supposed to be
about trolling.

Personally, I like the idea and I'm going to use it. However, it relies on
people downloading the extension. If enough people use it (which is definitely
possible), then it would be successful.

~~~
pla3rhat3r
Maybe because it says that on the page that was submitted.

~~~
infinitebattery
Okay, true. I concede. It's nevertheless interesting.

------
FrankenPC
This would have some interesting implications for MVC architectures.

------
xd
Nice idea, but I can see the hash clashing a bit with HTML anchors.

------
gbog
This is quite an interesting experiment, and a step in the right direction,
which should and must lead us to the complete removal of domain names.

Why are domain names bad? That should be obvious.

The main symptom of domain names' inherent brokenness is that the law must
patch it so that an unknown squatter cannot kidnap some domain, e.g. register
"france.com" and ransom it to the people who are most qualified to claim it.
This is ridiculous: the squatter should not have had the opportunity to squat
it in the first place (I know it doesn't apply to "France", but it applies to
many other names). Nowhere in the world do we see kids grab the seat in front
of the fireplace and refuse to let their grandma have it.

Moreover, what if a single ASCII string refers to two different things,
equally claimable by two groups of people? E.g. what about "francfort.com"?
Which city would it refer to? A contrived case: if Chinese people chose a
transliteration scheme for their language in which "google" meant China,
wouldn't they have some rights over google.com?

This level of brokenness is not even because of a leaky abstraction; this is a
sunken boat everyone has to use to cross the river.

On a more philosophical note, domain names are wrong because they build a kind
of universal language out of nowhere, grounded in nothing, without any kind of
democratic digestion and acceptance by human beings. We could have a universal
language shared by all humans, but it would take a very long process of slow
acceptance, with a percolation through all societies all over the world. Along
the way there would be many adjustments and reverts, and eventually we would
come up with a set of names that is good enough; but right now, domain names
are just a game of musical chairs that is ridiculous and must be stopped.

So I think using a string's hash is a nice step, because it starts blurring
the domain name.

However, I think a much better scheme would be to use the hash of the page
_content_ as the domain name. In this case, once the hash is determined, who
cares where the page is hosted, who cares which domain it has? The hash would
just be a way to download the content. And the job of search engines would be
to point us to these hashes.

And dynamic content, you say? Which dynamic content? Does one really care
about changes in wiki pages? As for a Twitter feed, each tweet is a fixed
content snippet, and the JavaScript fetching them is also fixed, or could be a
browser extension. And an up-to-date search engine would have the latest hash
for a keyword such as "twitter" or "The Guardian".

A nice side-effect would be that domain name based censorship would become
ineffective. And downloading content could be just some p2p checkouts.
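(A minimal sketch of the content-addressing idea -- the same principle Git uses to name objects -- with SHA-1 standing in for whatever hash a real system would choose:)

```python
import hashlib

def content_address(page_bytes: bytes) -> str:
    # Name a page by the hash of its own bytes: any mirror serving
    # identical bytes serves the same name, and the fetcher can
    # verify integrity regardless of where the bytes came from.
    return hashlib.sha1(page_bytes).hexdigest()

page = b"<html><body>static content</body></html>"
addr = content_address(page)
# Any edit, however small, yields a brand-new address:
print(content_address(page + b" ") == addr)  # False
```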

~~~
MichaelGG
How does this solve anything? Instead of registering france.com, I'll register
the hash of "France". Adding a level of indirection hasn't provided any
benefits.

People can already register domains like "food-in-chicago.com" or "for-sale-
baby-clothes.com".

Your contrived example about "Google" meaning "China" in Chinese is
irrelevant. The .CN registrar can enforce whatever rules they want. The US
.COM registrar can have its own rules. There's no issue there.

~~~
gbog
The problem is that .COM is de facto the universal entry point.

~~~
lmm
De jure too, isn't it? .us exists for US-specific content; .com ought to be
reserved for international sites.

~~~
tjgq
They ought to be, but aren't. Unfortunately, .com and most (all?) of the other
non-national TLDs are under US jurisdiction.

------
stock_toaster
rot13 domain names are the future? Humorously, tbbtyr.com is already taken.

I wonder about memfrob-based domain names.

------
lhgaghl
This is like pet names but pointless.

[https://en.wikipedia.org/wiki/Petname](https://en.wikipedia.org/wiki/Petname)

