http://to/ World's Shortest URL Shortener
164 points by Raphael on Dec 3, 2009 | hide | past | favorite | 145 comments



It's so short that hacker news doesn't even show the domain in the heading :-)


The fascination with URL shorteners is worrying and pointless. Can we get past this fad?


Indeed. They're a solution to a problem that doesn't exist.


I disagree with that statement. Even before the rise of microblogging, URL shorteners were helpful in certain situations.

Which URL would you rather paste in an email for readability's sake:

http://maps.google.com/maps?f=d&source=s_d&saddr=Oak...

or

http://tr.im/Gw8B

In context, the recipient should have no problem anticipating the URL's end point (i.e., you probably just wrote something like, "here are some directions to my house:"), but using the shortened URL makes the email much more readable and prevents any potential screwy scrolling issues that might be caused by an ultra unwieldy URL borking their email software.

That's just one non-microblogging use case for one site that spits out very long dynamic URLs -- there are many use cases for many sites.

TinyURL et al. certainly did (and continue to) solve a problem, imho. And they've become even more useful for microblogging sites like Twitter, on which the character-limit constraint (which essentially defines that type of service) requires that URLs be shortened.

[EDIT: I know HN auto-truncated that Google Maps URL, but it is 355 characters long -- and most email software (my use case) wouldn't auto-truncate the URL in the same way. So readability would be negatively affected for the recipient by using the long URL instead of the short one.]


The former. Even though it's 355 characters, at least it tells me it's a Google maps link. Which the shortened version doesn't.

There are some situations where URL shortening is arguably useful. But there's absolutely no reason why "microblogging" should be one of them.


You could use this bookmarklet, which translates all short URLs into long ones. But then again, maybe you do not trust longurlplease.com.

  javascript:void(function(){if(typeof%20jQuery%20==%20'undefined'){var%20s=document.createElement('script');s.src='http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js';document.getElementsByTagName('head')[0].appendChild(s);}var%20l=document.createElement('script');l.src='http://www.longurlplease.com/js/jquery.longurlplease.js';document.getElementsByTagName('head')[0].appendChild(l);function%20runIfReady(){try{if($.longurlplease){$.longurlplease();clearInterval(interval);}}catch(e){}}var%20interval%20=%20window.setInterval(runIfReady,100);}())


That inline bookmarklet didn't work in Safari 4 (OSX 10.6), but the bookmarklet from http://www.longurlplease.com/ did--thank you for sharing it.


Spoken like a person without a Twitter account.


Twitter should solve this problem: they should attach links separately from the 140 characters, just as they don't force you to encode a person's picture within the 140 characters.

For SMS, well if the link goes beyond 140, send it in a separate SMS? Should SMS messaging compatibility for Twitter break the whole paradigm of transparency of addressing on the web?

Time to move on and create solutions looking forward, not backward.


FriendFeed FTW. You can attach photos [separately], and each FriendFeed post can have a comment thread.


It's not like SMS is limited to 160 characters, anyway.


That's a stupid design decision on the part of Twitter, which the rest of the Internet is paying for.

At the very least, Twitter could shorten URLs only when messages go out via the SMS gateway, and not universally. All they'd need to do is count any valid URL as 20 characters (the length of a bit.ly shortened URL) for the purpose of the limit, while preserving the actual URL up until that limit actually mattered.
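As a rough sketch of that counting rule (the 20-character figure and the sample text are just illustrative):

    # Count every URL as a flat 20 characters (roughly a bit.ly link) when
    # checking the 140-character limit, instead of its real length.
    my $tweet = 'Directions to my place: http://maps.example.com/very/long/path?with=params';
    (my $counted = $tweet) =~ s{https?://\S+}{ 'x' x 20 }ge;
    printf "effective length: %d of 140\n", length $counted;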

The sad and ironic part is that the 140-character limit and the URL shorteners it spawned will probably stick around, due to Twitter, far longer than Twitter-via-SMS does. I don't know many people who even use Twitter via SMS anymore; it was a cool feature initially, but it's being quickly obsoleted by smartphones that can access Twitter through much more user-friendly interfaces over TCP/IP.

"140 characters" is likely to become the "4 feet, 8-1/2 inches" of the Internet. Totally arbitrary, far from ideal, nearly impossible to change.


E.g., most people.


You got me ;)


Yeah, I don't get why shorteners don't have the option to do something like http://tri.im/maps.google.com/8HkN - short but still informative (way better than 'visit this preview page')


> it tells me it's a Google maps link. Which the shortened version doesn't

See http://twi.bz/ .

"twi.bz shortens web addresses without completely obscuring where the link ends up. By keeping part of the domain name in the twi.bz link it's possible to instantly see the site you'll end up on by clicking the link."


The former. It shows the reader not only where I'm sending them, but even gives them an idea of what to expect, not to mention the whole dead-links problem that will inevitably occur when these sites finally go down.

What I would normally do is something like this: "<loads of text> Check out my map here [1] <more text>" and at the end put "[1] http...", so that a giant ten-page URL doesn't interfere with the message, yet I don't have to shorten it either. If I think the recipient won't understand what I mean, I'd put more details in, e.g. [1 below]

The ONLY place where I'd consider using a URL shortener is printed material, since it's easier to type in by hand than a giant URL, but this has the same dead-link problem, especially if printed in magazines.


I like your footnote idea, though I think it seems fairly unnatural for casual communication. To me, short URLs seem like the more user-friendly solution.

I'm also still not sold on the dead link problem for two reasons.

1. You're already relying on one site (in this case Google) not to go down or change its dynamic URL patterns. Certainly adding another layer increases the chances of a dead link, but any time you link to something on the web you're taking a risk of sending someone to an error page.

2. Most instances in which you'd use a short URL -- such as email -- are for instant communication in which the recipient is likely to visit that link in the next day or two. In other words, you wouldn't link to a short URL in the body of your web page or blog (something with more permanence on which you want to be sure the link works months or even years from now), but for email or Twitter messages, which are generally fleeting and timely, that matters less. As long as the link works right now, all is good. If the person visiting the link wants to save it for later, they'll more than likely bookmark it, cutting the shortener out of the loop anyway.


Sure, but it doesn't change the "Oh, some random site, I have to click on it to see what it is" vs. "Oh, it's Google Maps, and I can also see the street in the URL, I'll save that for later" loss of information in shortened URLs.

I dunno, you can use URL shorteners if you want, but I'm not convinced of their usefulness, that's all.


1. The dead link problem is that Google, for example, can't control whether a bit.ly link will work but they can ensure maps.google.com links will always work if they want to. And they have a bigger incentive to ensure this than bit.ly do (since they consider their Maps service to be an important service).

2. True, but there is a vast amount of valuable information held in tweets (along with the noise), each with a permalink. A tweet's permalink is useless if it contains a shortened URL which no longer works. Do we really want this body of information to become useless as soon as the URL-shortener-of-the-week loses funding and turns its servers off?


That problem should be solved in the 'reader'. If a URL looks long, get your email client to show a small version of it. Get it to show the full URL when you hover over it. Whatever. Just do it in the client.

Even on Twitter it's a stupid artificial restriction. It would be pretty trivial to show minified links for SMS alerts (which is a tiny fraction of Twitter's traffic now anyway), and show full URLs for everyone else. Then clients could show the URLs how they like.


Sure, since it's definitely easier to change thousands of different email clients, twitter clients, and cellphones in use around the world than it is to write a 10-line web app that wraps a key-value store.
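For the curious, here is a minimal sketch of that kind of app, assuming Mojolicious::Lite, with a plain in-memory hash standing in for a real key-value store (the route names and the example short domain are made up):

    #!/usr/bin/perl
    # Minimal URL-shortener sketch: a thin web layer over a key-value store.
    use Mojolicious::Lite;

    my %store;         # short code => long URL (a real service would persist this)
    my $next = 'a';    # Perl's magic string increment: a, b, ... z, aa, ab, ...

    post '/shorten' => sub {
        my $c   = shift;
        my $url = $c->param('url')
            or return $c->render(text => "missing url\n", status => 400);
        my $key = $next++;
        $store{$key} = $url;
        $c->render(text => "http://example.short/$key\n");
    };

    get '/:key' => sub {
        my $c   = shift;
        my $url = $store{ $c->stash('key') }
            or return $c->reply->not_found;
        $c->redirect_to($url);    # 302 back to the original URL
    };

    app->start;

Run it with morbo and POST a url parameter to /shorten. Of course, the ten-odd lines are the easy part; the persistence, spam filtering, analytics and uptime that make a shortener worth trusting are exactly what's missing here.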


Exactly.

And while they're at it, they could emphasize the actual root domain when showing links.

The fact that scammer links with obfuscated URLs are still successful is shameful.


This tells me Google should consider providing shortened URLs. maps.google.com/a7sga would be great.


That could possibly be implemented as a hash of the included header/response terms.

A simple example could be http://www.google.com/search?hl=en&q=hacker+news+site:yc... -> http://www.google.com/search?#b9fh23

One downside would be that you can't see the query parameters and values in plaintext, but a browser extension / inspector could probably do this.
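A toy version of that idea, assuming the service keeps a lookup table from token back to the original query string (the example URL and token length are made up):

    use Digest::MD5 qw(md5_hex);

    # Hash the query string down to a short token. The service still has to
    # store token => query somewhere, because the hash can't be reversed.
    my $long = 'http://www.google.com/search?hl=en&q=hacker+news';
    my ($base, $query) = split /\?/, $long, 2;
    my $token = substr(md5_hex($query), 0, 6);    # first 6 hex chars of the digest
    print "$base?#$token\n";                      # -> http://www.google.com/search?#<token>

That also makes the downside concrete: the parameters are no longer visible in the link itself, so an extension or inspector would have to ask the service (or a local cache) to expand the token.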


You can make it much shorter without loss; unfortunately, Google Maps doesn't do it for you:

http://maps.google.com/?saddr=Oak+Green+Way+33611&daddr=...


A URL shortener makes sense in this context, for sure. I think providing your own friendly paths in the web apps you build is a good idea too, so people don't feel the need to use a URL shortener, which is one more thing to do.

The Maps URLs are ridiculous. Google should provide their own shortener for these, imo. Then you get the best of both worlds: smaller, more readable URLs within a domain you recognize and trust.


And a problem to a solution that does.


> a solution to a problem that doesn't exist

I do not use them, but long URLs are a pain in many contexts, and it's not always possible to [link]wrap a tag around it[/link]. Server > client communication has gotten incredibly flexible, but client > server or client > peer communication is still mostly text-driven, and many URLs are not text-friendly or human-friendly.

The widespread use of these shorteners strongly argues that they do solve a problem, just not one that matters to you as a programmer.


If the problem doesn't exist, why do so many people find them useful?


death to tiny urls! I want context back!


I was against shorteners for a long while, but you can't argue against the fact that it lets you fit more in your message when you have a character limit to deal with, such as with Twitter, identica, FriendFeed, etc. I try to restrict my use of shorteners to microblogging. When following shortened links, I just use common sense and take into account how much I trust the source of the shortened URL. In practice, this has worked very well for me.


What if Twitter and other places that have message length limits ignored the length of the URL in your post?


They can't do that, due to text-message length restrictions.


They should have their own URL shortener.


Every major website should have its own URL shortener. People would be more willing to trust and click a random link found on Twitter if it came from Gizmodo's own short domain rather than bit.ly.

Then again, people shouldn't click random links.


"Then again, people shouldn't click random links."

What does this even mean? The web's primary use case is clicking random links as defined in some context. Shortened links rarely show up in a tweet without any defining context.


No need to build a new one every time; services like http://totally.awe.sm let you set up a URL shortener on any domain. (Disclaimer: the founder is a friend of mine)

And before you say "it's trivial, I could build that in a weekend" I suggest you try it; there is a lot of hidden complexity in something so simple, especially at any kind of scale.


I understand, from looking at this when I first came across Twitter, that there is a well-established protocol for using multiple text messages to send a single message (concatenated SMS).

Couldn't they in any case have a convention of putting a hash mark (#) for the URL when texting and then sending URLs in subsequent messages, each message being serially (by time of sending) matched with a # mark?


The concern for me is that they disguise the original URL, and if the shortener ever ceases to exist the context of a lot of microblogging posts will be lost forever.


Who cares if the content is lost forever? It's ephemeral data anyway. The emphasis is real-time, not all-time.


The stats that you can get out of something like bit.ly are quite useful. Or at least ego-strokingly hypnotic.


I'd say they were largely useless and inaccurate :/


I use an extension, like many others, that automatically "follows" every URL-shortener URL on a page. Your stats are likely skewed.


In an ideal world the extension would only be making HEAD requests and the analytics system would only count GETs, right?


In an ideal world, HEAD and GET probably wouldn't exist.

But we're here. The question is, does the analytics system work that way?


Have they just broken every single URL regex out there?


URL regexes which were already broken from the beginning...


Indeed. DNS names without dots have been used for decades in intranets.


However, what it resolves to depends on your search domain. If I shared http://www on Twitter, everyone would most likely see a different site. People should really be using http://to. to avoid conflicts with internal servers called 'to'.


Not to mention http://museum


No, even worse. They broke many of them... and people won't know which until they are bitten.


The entire point of having a domain "name" is to have a human-readable representation of an address. Take the human-readable part out of the URL and you're left with something useless.


This is why it makes more sense to run your own URL shortener, and make it discoverable using a pattern like rev="canonical":

http://sites.google.com/a/snaplog.com/wiki/short_url
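The discovery part is just a link element on the long page advertising its own short form, something along these lines (the short domain here is hypothetical):

  <link rev="canonical" href="http://shrt.example/abc123">

A posting tool or browser extension that finds this can reuse the site's own short URL instead of minting one at a third-party shortener.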


Because:

  http://milkandcookies.net/2008/07/12/?lang=en_US&cntry=US&source=http%3A%2F%2Fwww.google.com%2Fsearch%3Fq%3D123%26lang%3Den-US&geoloc=-45.001,-53.175
is pretty readable and useful!


I'm not sure if you're trying to make a point here but yes, that is pretty readable and useful.


This is an excellent point.


Can someone give me a quick introduction on how this was done?


1- Periods (.) don't do anything after a normal domain (i.e., www.weebly.com.), but here the trailing dot is useful for preventing the browser from redirecting to http://www.to.com/

2- The real domain we're looking at is just "to" -- the top-level domain (TLD) itself, with nothing attached in front of it.

3- The .to registry added an A-record for the "to" domain, which resolves correctly.

[Edit: Looks like .cm does this too:

  ;; ANSWER SECTION:
  cm.  86400  IN  A  195.24.205.60

]
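If you want to check this yourself, the TLD A-records are visible with an ordinary DNS query (output omitted; it obviously only works while the registries keep publishing the records):

  dig +short to. A
  dig +short cm. A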


.cm is owned by a spammer. Nearly anything .cm redirects to agoga.net.



org. is interesting. It looks like the Apache server isn't set up correctly.


Actually, the period at the end is part of the DNS standard; without it, your machine's search path is consulted first (e.g., via resolv.conf on Unix/Linux).

So if your ISP is AOL, you might have a search path of aol.com, so looking up "to" will first try to.aol.com; if that exists, it will go there. Putting a "." at the end lets it go straight to the real "to".

This isn't normally a problem, because it's not like AOL is going to set up google.com.aol.com. But really everyone should put periods at the end of domains.
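Concretely, that search path comes from something like this (values are illustrative):

  # /etc/resolv.conf
  search aol.com
  nameserver 192.0.2.53

With that in place, a bare "to" is first tried as "to.aol.com", while "to." with the trailing dot is treated as fully qualified and skips the search list.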


Does this mean we should expect http://com./ http://net./ etc?


Didn't CNet use to use http://com./?


no, they had com.com (e.g. http://news.com.com/). I believe that domain is now for sale. Probably gets a TON of typo traffic.


For whatever reason they actually used com.com.


Because it was the dot-com boom, and what was better than one dot-com? TWO!

But yeah, they then proceeded to put everything under .com.com, so they had news.com.com, cnet.com.com etc. It was painfully stupid.


I believe the justification was so they could use unified cookies across all their properties -- they set the cookie for "com.com", and then it was available to news.com.com, search.com.com, downloads.com.com, etc., in the same way that yahoo.com cookies are available to news.yahoo.com, sports.yahoo.com, etc.
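In HTTP terms that's just a cookie scoped to the shared parent domain, e.g.:

  Set-Cookie: session=abc123; Domain=.com.com; Path=/

Browsers will send that cookie to news.com.com, search.com.com and so on, whereas they refuse cookies scoped to a bare TLD like .com -- which is presumably why the extra "com" was needed at all.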


An A-record set up by the ISP that runs the .to TLD.


Thx, got confused by the dot after "to" for a second.


No workie on Chrome and IE.


Tommy Boy reference, I'm guessing.


Public, covert racism?


Huh?

And for the record, chrome and ie no worky for me either. Firefox does though.


When you say no workie or no worky, what accent are you mimicking?


A little kid's? Don't know what you are talking about, and a quick google gives no relevant results. Perhaps old meanings die quietly over time? Also, this thread is the top google reference for "no workie" racism, doesn't look like there is much keyword competition.


I can't believe no one here seems to be using http://urlshorteningservicefortwitter.com/


lol wow, amazing landing page text.

'THINK ABOUT IT: If your grandmother sees this link: http://bit.ly/zPWG6 she'll think, "Hmm, does someone want me to fly to Lithuania to get my teeth fixed?"'


I wrote a program to find the shortest URL. Enjoy!

    #!/usr/bin/perl
    #
    # Brute-force the shortest free name on http://to./ by POSTing the
    # submission form with candidate names 'a', 'b', ... until one sticks.

    use strict;
    use warnings;
    use WWW::Mechanize;

    die "usage: $0 <url>" unless @ARGV;

    my $name = 'a';                # magic string increment: a..z, aa, ab, ...
    my $m    = WWW::Mechanize->new;
    while (1) {
        print "Trying http://to./$name\n";
        $m->post('http://to./', {
            url  => $ARGV[0],
            name => $name,
            'Witz that URL!' => 'Witz that URL!'   # presumably the form's submit button
        });
        unless ($m->content =~ /sorry/) {          # no "sorry" in the response: we got it
            print "You got http://to./$name\n";
            exit(0);
        }
        $name++;
    }


It's polite to add a short sleep between requests. Although, in Perl it would be a short select!
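For reference, the four-argument select idiom meant here is just (the quarter second is arbitrary):

    select(undef, undef, undef, 0.25);    # sub-second sleep between requests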


On my server this would get your IP blocked at the firewall in a few minutes.


Maybe it's something with my corporate routing/firewall/who knows, but I just get an error saying my browser can't find it (no matter the browser). For that reason I really hope this doesn't catch on.


I think my corporate proxy is blocking this as well, or perhaps our DNS servers or something. In any case, I assume there will be a large swath of people for whom this redirection will not work.


This was my idea originally. I talk to the guy that runs the registrar occasionally (I got burri.to 12 years ago...)


Thank you for this. I now have another example to use when I bitchslap people who don't read domain name RFCs.


Sweet, I got http://to./m/r


Tried to register http://to./index.php

Succeeded, but sadly http://to./ rewrote it to http://index.php instead.


I'm aware that restructuring the Domain Name System is not practical at this point but here's my idea: Domains should work hierarchically and be privately operated. Google Search would be "http://Google/Search/Web ". A company would buy "http://Org " and run a forwarding service so that "http://Org/RedCross " forwarded to the respective site. This would allow "http://a/ " to be a forwarder and, best of all, for the web to be fully recursive. Seems like the possibilities for such a system are limitless. For example, an internet-archive would be the normal site with "archive/" injected. There are of course many complex details and inefficiencies, but it would greatly improve human-readability, making things more easily explainable.

This could partially be based on all file-extensions being in the file-data rather than in the name and all folders having an "index" file that represented them (which could, then, be any type of file). I'd like to have an explanation for down-votes, please.


Chrome doesn't really like the URLs, and Twitter doesn't see them either. Awesome, though.


It's a DNS rather than a browser issue, I think


Funny, it's working under my current configuration with Firefox but not with Opera, and Chrome turns it into a Google search.

Not the world's best URL shortener...


Yeah, Chrome does the same for me. I can click through the links just fine, but pasting into the URL bar (which you're probably going to have to do until the world updates its URL regexes) only gives me the option of a search, unless the URL's in the history.


Works fine for me in Opera (10.10), whether clicking or typing.


Regexes need to be fixed for it. If you have a sentence with http://facebook.com. - then what is the dot: end of sentence, or end of domain? Just having http://to/ will not work, since the resolver will look in the local search domain first.
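For illustration, a deliberately loose pattern that at least tolerates a dot-less host and an optional trailing dot might look like this; it can't resolve the end-of-sentence ambiguity, which is exactly the point above:

    my $url_re = qr{\bhttps?://[A-Za-z0-9-]+(?:\.[A-Za-z0-9-]+)*\.?(?:/[^\s<>"]*)?};

    for my $text ('go to http://to./abc now', 'see http://facebook.com. for details') {
        print "$1\n" while $text =~ /($url_re)/g;    # the trailing dot gets kept either way
    }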


Works with Chrome (Windows) just fine...


works fine in chrome for Linux...


And OSX


bit.ly isn't winning because of the length, but because of the reliability, commitment to persistence, anti-spam, and analytics.

This is blindingly obvious. A few characters shorter just doesn't matter.


My problem with shorteners really boils down to the conceivable scenario where one of them gets hacked and sends everyone to a malicious site.

I also like to know where people are sending me, but that is a secondary concern.


Seems they have problems if the same URL is used more than once: the subsequent additions all break. You can get around it by adding a query string onto the domain you're shortening, like ?like=this


Oddly, when I try to "shorten" their URL http://www.to/ it gets mapped to: http://www.to/jNUqgaaD8k

:)


Seems to require a dot in the URL:

This: http://to./z0ba1

Not this: http://to/z0ba1


Depends on your DNS. Works for some without.


The DNS works, but the browser doesn't. Without the trailing dot the browser looks up the .com address.

With the dot it works, but then the browser fixes up the URL and removes the dot - so all subsequent requests don't work.

(This is for me, using firefox on linux.)


Fine in Opera.


Seems Safari users are struggling without the . as well -- probably a WebKit issue then

edited for spelling mistake


Works fine here with Google Chrome.


Somehow uses IDN TLD to eschew the TLD altogether. Let the landgrab begin!


No, it is the TLD (for Tonga). Somehow they hawked out their TLD A-record.

(edit: turns out it's actually run by the ISP who operates the .to TLD)


Oh, no wonder it's so ingenious


Check out http://ai (my DNS seems not to like it but there is a site). I love the URL. :)


Yeah, they got root level on the TLD... how did they manage that?


When it comes to national TLDs and how a company can have this level of access... I don't want to know...


Countries can, and frequently do, sell the rights to their ccTLDs to third parties. For example, Verisign leases the rights to the .tv domain off the Tuvalu government for $50m/year.


Not exactly. It's $50MM over 12 years, so more like $4MM/year.

I remember when that deal was announced, during the first bubble. Everyone thought it was foolish, now it seems genius.


For which side?


Getting access to sell all .tvs for $4MM/year is a steal, IMHO.

Not to mention the manner in which they sold them is unique. With most TLDs, all domains cost the same price, and it's first come, first served. With .tv, the domains were priced according to their value, with many costing $25,000 or more per year. So, for instance, I have no idea what mlb.tv cost Major League Baseball, but it was a lot more than $49/year.

Now that .tv seems to have hit its tipping point, $50 million over 12 years is a bargain.


It was a pretty shrewd move for Tuvalu too - $4m/year is about 30% of their GDP.


That's a really good point. And the government of Tuvalu has a 20% stake in the company which actually owns the contract (Verisign owns the rest).

Very well structured deal for both sides.


Also LibyanSpider.com


Hmm... http://to./ was translated as http://to./mt8hm ! That's longer than the original...



Someday one of these guys is going to switch back to using ftp://. Sure, they'd have to proxy or cache everything (no redirect responses), but hey, it's shorter.


Holy cow, this makes bit.ly look like long-domain-name-i-spent-10-dollars-on-just-to-put-one-snarky-word-in-96-point-helvetica.com


Wondering how many people tried going to long-domain-name-i-spent-10-dollars-on-just-to-put-one-snarky-word-in-96-point-helvetica.com

like me...


Looks like they put up a secret password now.


Anyone brute forced it yet?


That's old news. Dot TK has had that for ages. Go to http://tweak.tk/ and read the technical part.

They do it better, though: http://tk./abcde is http://abcde.tk, which is even one character shorter ;)


btw: http://tk./ works as well ;)


Can you please tell me the actual domain name? I have been dying to know for the last 24 hrs.



I want to know the real, complete domain name of http://to./ and http://tk./ I read the whole conversation but didn't get it. Any help will be greatly appreciated.


http://tk./ is the full domain name. It's just an A record on the TLD. http://tk./ is also available through http://dot.tk/.

I hope this helps you :)


ok, Thanks.


I think users of Squid get sent to http://www.to/ ...


Darn. http://to/life was taken. Oh well. L'chai-im!




Just what the world needs: YAUS (yet another URL shortener).


I smell irony.


Aww... they've now added password protection. Existing ones still work, though.


Can you please tell me the actual domain name? I have been dying to know for the last 24 hrs.


It doesn't work with https.


can someone tell me the real domain name of http://to./ ?



