In context, the recipient should have no problem anticipating the URL's end point (i.e., you probably just wrote something like, "here are some directions to my house:"), but using the shortened URL makes the email much more readable and prevents any potential screwy scrolling issues that might be caused by an ultra unwieldy URL borking their email software.
That's just one non-microblogging use case for one site that spits out very long dynamic URLs -- there are many use cases for many sites.
TinyURL et al. certainly did (and continue to) solve a problem, imho. And they've become even more useful for microblogging sites like Twitter, where the character limit (which essentially defines that type of service) requires that URLs be shortened.
[EDIT: I know HN auto-truncated that Google Maps URL, but it is 355 characters long -- and most email software (my use case) wouldn't auto-truncate the URL in the same way. So readability would suffer for the recipient if I used the long URL instead of the short one.]
Twitter should solve this problem: links should be counted separately from the 140 characters, just as they don't force you to encode a person's picture into the 140 characters.
For SMS, well, if the link pushes the message beyond 140 characters, send it in a separate SMS. Should SMS compatibility for Twitter break the whole paradigm of transparency of addressing on the web?
Time to move on and create solutions looking forward, not backward.
That's a stupid design decision on the part of Twitter, which the rest of the Internet is paying for.
At the very least, Twitter could only shorten URLs when messages go out via the SMS gateway, and not universally. All they'd need to do is count any valid URL as 20 characters (length of a bit.ly shortened URL) for the purpose of the limit, while preserving the actual real URL up until that limit actually mattered.
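The counting rule proposed above is easy to sketch. This is a hypothetical illustration (the 20-character cost and the URL regex are assumptions, not Twitter's actual behavior): every URL in the message counts as a fixed 20 characters toward the limit, while the real URL is preserved.

```python
import re

# Hypothetical counting rule: any URL costs a flat 20 characters
# (roughly the length of a bit.ly link) toward the 140-char limit.
URL_RE = re.compile(r"https?://\S+")

def effective_length(message, url_cost=20):
    """Length of `message` with every URL counted as `url_cost` chars."""
    without_urls = URL_RE.sub("", message)
    n_urls = len(URL_RE.findall(message))
    return len(without_urls) + n_urls * url_cost
```

With this rule, a 355-character Maps URL costs the same 20 characters as a bit.ly link, so the SMS gateway alone would need to substitute a shortened form.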
The sad and ironic part is that the 140-character limit and the URL shorteners it spawned will probably stick around, due to Twitter, far longer than Twitter-via-SMS does. I don't know many people who even use Twitter via SMS anymore; it was a cool feature initially, but it's being quickly obsoleted by smartphones that can access Twitter via much more user-friendly interfaces via TCP/IP.
"140 characters" is likely to become the "4 feet, 8-1/2 inches" of the Internet. Totally arbitrary, far from ideal, nearly impossible to change.
Yeah, I don't get why shorteners don't have the option to do something like http://tri.im/maps.google.com/8HkN - short but still insightful (way better than 'visit this preview page')
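The kind of link suggested above is trivial to construct. A minimal sketch (the shortener host and the code-assignment scheme are assumptions for illustration, not tri.im's actual API): keep the target's domain visible in the short URL's path.

```python
from urllib.parse import urlparse

def readable_short_url(long_url, code, shortener_host="tri.im"):
    """Build a short link that keeps the target's domain visible,
    e.g. http://tri.im/maps.google.com/8HkN (hypothetical format)."""
    domain = urlparse(long_url).netloc
    return "http://%s/%s/%s" % (shortener_host, domain, code)
```

The reader loses the full path but still sees at a glance which site the link resolves to.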
"twi.bz shortens web addresses without completely obscuring where the link ends up. By keeping part of the domain name in the twi.bz link it's possible to instantly see the site you'll end up on by clicking the link."
The former. It shows the reader not only where I'm sending them, but even gives them an idea of what to expect, not to mention the whole dead-links problem that will inevitably occur when these sites finally go down.
What I would normally do is something like this: "<loads of text> Check out my map here [1] <more text>" and at the end put "[1] http...". That way a giant ten-page URL doesn't interfere with the message, yet I don't have to shorten it either. If I think the recipient won't understand what I mean, I'd put more details in, e.g. [1 below].
The ONLY place where I'd consider using a URL shortener is printed material, since it's easier to type in by hand than a giant URL, but this has the same dead-link problem, especially if printed in magazines.
I like your footnote idea, though I think it seems fairly unnatural for casual communication. To me, short URLs seem like the more user-friendly solution.
I'm also still not sold on the dead link problem for two reasons.
1. You're already relying on one site (in this case Google) not to go down or change its dynamic URL patterns. Certainly adding another layer increases the chances of a dead link, but any time you link to something on the web you're taking a risk of sending someone to an error page.
2. Most instances in which you'd use a short URL -- such as email -- are for instant communication in which the recipient is likely to visit that link in the next day or two. In other words, you wouldn't link to a short URL in the body of your web page or blog (something with more permanence on which you want to be sure the link works months or even years from now), but for email or Twitter messages, which are generally fleeting and timely, that matters less. As long as the link works right now then all is good. If the person visiting the links wants to save it for later, they'll more than likely bookmark it, cutting the shortener out of the loop anyway.
Sure, but it doesn't change the loss of information in shortened URLs: "Oh, some random site, I have to click on it to see what it is" vs. "Oh, it's Google Maps, and I can also see the street in the URL; I'll save that for later."
I dunno, you can use URL shorteners if you want, but I'm not convinced of their usefulness, that's all.
1. The dead link problem is that Google, for example, can't control whether a bit.ly link will work but they can ensure maps.google.com links will always work if they want to. And they have a bigger incentive to ensure this than bit.ly do (since they consider their Maps service to be an important service).
2. True, but there is a vast amount of valuable information held in tweets (along with the noise), each with a permalink. A tweet's permalink is useless if it contains a shortened URL which no longer works. Do we really want this body of information to become useless as soon as the URL-shortener-of-the-week loses funding and turns its servers off?
That problem should be solved in the 'reader'.
If a url looks long, get your email client to show a small version of it. Get it to show the full URL when you hover over it. Whatever. Just do it in the client.
Even on twitter it's a stupid artificial restriction. It would be pretty trivial to show minified links for SMS alerts (Which is a tiny amount of twitter anyway now), and show full URLs for everyone else. Then clients could show the urls how they like.
Sure, since it's definitely easier to change thousands of different email clients, twitter clients, and cellphones in use around the world than it is to write a 10-line web app that wraps a key-value store.
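For what it's worth, the "10-line web app that wraps a key-value store" really is about that size at its core. A minimal sketch (an in-memory dict stands in for the real key-value store, and the base-62 code scheme is an assumption; persistence, collisions, and scale are exactly the hidden complexity a later comment warns about):

```python
import string

ALPHABET = string.digits + string.ascii_letters  # base-62 alphabet
store = {}       # in-memory stand-in for a real key-value store
next_id = [1]    # monotonically increasing counter for new codes

def base62(n):
    """Encode a positive integer as a base-62 string."""
    s = ""
    while n:
        n, r = divmod(n, 62)
        s = ALPHABET[r] + s
    return s or "0"

def shorten(url):
    """Store `url` under a fresh short code and return the code."""
    code = base62(next_id[0])
    next_id[0] += 1
    store[code] = url
    return code

def expand(code):
    """Look up the original URL for `code`, or None if unknown."""
    return store.get(code)
```

Wrap `shorten` and `expand` in two HTTP handlers and you have the toy version; keeping it alive, fast, and abuse-resistant is the hard part.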
URL Shortener makes sense in this context for sure. I think providing your own friendly paths in web apps you build is a good idea too, so people don't feel the necessity to use a url shortener which is one more thing to do.
The maps URLs are ridiculous. Google should provide their own shortener for these imo. Then you get the best of both worlds, smaller more readable URLs and within a domain you recognize / trust.
I do not use them, but long URLs are a pain in many contexts, and it's not always possible to [link]wrap a tag around it[/link]. Server > client communication has gotten incredibly flexible, but client > server or client > peer communication is still mostly text-driven, and many URLs are not text-friendly or human-friendly.
The widespread use of these shorteners strongly argues that they do solve a problem, just not one that matters to you as a programmer.
I was against shorteners for a long while, but you can't argue against the fact that it lets you fit more in your message when you have a character limit to deal with, such as with Twitter, identica, FriendFeed, etc. I try to restrict my use of shorteners to microblogging. When following shortened links, I just use common sense and take into account how much I trust the source of the shortened URL. In practice, this has worked very well for me.
Every major website should have its own URL shortener. People would be more willing to trust and click a random link found on Twitter if it's from Gizmodo's own short domain rather than bit.ly.
"Then again, people shouldn't click random links."
What does this even mean? The web's primary use case is clicking random links as defined in some context. Shortened links rarely show up in a tweet without any defining context.
No need to build a new one every time; services like http://totally.awe.sm let you set up a URL shortener on any domain. (Disclaimer: the founder is a friend of mine)
And before you say "it's trivial, I could build that in a weekend" I suggest you try it; there is a lot of hidden complexity in something so simple, especially at any kind of scale.
I understand, from looking at this when I first came across Twitter, that there is a well-established protocol for using multiple text messages to send a single message.
Couldn't they in any case have a convention of putting a hash mark (#) for the URL when texting and then sending URLs in subsequent messages, each message being serially (by time of sending) matched with a # mark?
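The proposed convention is simple to sketch. A hypothetical illustration (the `#` placeholder and the ordering rule come from the comment above; the URL regex is an assumption): replace each URL in the body with `#`, then send the URLs as follow-up messages in order.

```python
import re

URL_RE = re.compile(r"https?://\S+")

def split_for_sms(text):
    """Replace each URL in `text` with '#' and return
    (body, follow-up messages), one follow-up per URL, in order."""
    urls = URL_RE.findall(text)
    body = URL_RE.sub("#", text)
    return body, urls
```

The receiving client would then match each `#` to the corresponding follow-up message by arrival order.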
The concern for me is that they disguise the original URL, and if the shortener ever ceases to exist, the context of a lot of microblogging posts will be lost forever.
However, what it resolves to depends on your search domain. If I shared http://www on Twitter, everyone would most likely see a different site. People should really be using http://to. (with the trailing dot) to avoid conflicts with internal servers called 'to'.
The entire point of having a domain "name" is to have a human-readable representation of an address. Take the human-readable part out of the URL and you're left with something useless.
1- Periods (.) don't do anything after a domain (i.e., www.weebly.com.), but they are useful for preventing browsers from redirecting to http://www.to.com/
2- The real domain we're looking at: "to" -- no "suffix" attached (TLD: top-level domain)
3- The .to registry added an A-record for the "to" domain, which resolves correctly.
Actually, the period at the end is part of the DNS standard; without it, your machine's search path is consulted first (e.g., the search list in resolv.conf on Unix/Linux).
So if your ISP is AOL, you might have a search path of aol.com, and looking up "to" will first try to.aol.com; if that exists, it will go there. Putting a "." at the end lets the lookup go straight to the root. This isn't normally a problem, because it's not as if AOL is going to set up google.com.aol.com. But really, everyone should put periods at the end of domains.
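The lookup order being described can be mimicked without touching real DNS. A sketch of a stub resolver's candidate list under the simple rule the comment describes (real resolvers also have an `ndots` threshold and other knobs; this is illustration only):

```python
def candidate_names(name, search_path):
    """Order in which a simple stub resolver tries names:
    a trailing dot marks the name fully qualified and skips the
    search path; otherwise each search suffix is tried first."""
    if name.endswith("."):
        return [name.rstrip(".")]
    return ["%s.%s" % (name, suffix) for suffix in search_path] + [name]
```

So with `search aol.com` in resolv.conf, "to" tries to.aol.com before "to", while "to." goes straight to the top-level name.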
I believe the justification was so they could use unified cookies across all their properties -- they set the cookie for "com.com", and then it was available to news.com.com, search.com.com, downloads.com.com, etc., in the same way that yahoo.com cookies are available to news.yahoo.com, sports.yahoo.com, etc.
A little kid's? Don't know what you are talking about, and a quick google gives no relevant results. Perhaps old meanings die quietly over time? Also, this thread is the top google reference for "no workie" racism, doesn't look like there is much keyword competition.
'THINK ABOUT IT: If your grandmother sees this link: http://bit.ly/zPWG6 she'll think, "Hmm, does someone want me to fly to Lithuania to get my teeth fixed?"'
Maybe it's something with my corporate routing/firewall/who knows, but I just get an error saying my browser can't find it (no matter the browser). For that reason I really hope this doesn't catch on.
I think my corporate proxy is blocking this as well, or perhaps our DNS servers or something. In any case, I assume there will be a large swath of people for whom this redirection will not work.
I'm aware that restructuring the Domain Name System is not practical at this point but here's my idea:
Domains should work hierarchically and be privately operated. Google Search would be "http://Google/Search/Web ". A company would buy "http://Org " and run a forwarding service so that "http://Org/RedCross " forwarded to the respective site. This would allow "http://a/ " to be a forwarder and, best of all, for the web to be fully recursive. Seems like the possibilities for such a system are limitless. For example, an internet-archive would be the normal site with "archive/" injected.
There are of course many complex details and inefficiencies, but it would greatly improve human-readability, making things more easily explainable.
This could partially be based on all file-extensions being in the file-data rather than in the name and all folders having an "index" file that represented them (which could, then, be any type of file).
I'd like to have an explanation for down-votes, please.
Yeah, Chrome does the same for me. I can click through the links just fine, but pasting into the URL bar (which you're probably going to have to do until the world updates its URL regexes) only gives me the option of a search, unless the URL's in the history.
Regexes need to be fixed for it. If you have a sentence with http://facebook.com. -- then what is the dot: the end of the sentence, or the end of the domain? Just having http://to/ will not work, since the resolver will look in the local domain first.
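The ambiguity is easy to demonstrate. A sketch with two illustrative patterns (both are common idioms, not any particular site's actual URL regex): a naive pattern swallows the sentence-ending period, while a pattern that refuses a trailing dot treats it as punctuation -- and would therefore strip the meaningful dot from a fully qualified name like http://to.

```python
import re

sentence = "Check out http://facebook.com. It's great."

# Naive: grab all non-whitespace -- keeps the trailing period.
naive = re.search(r"https?://\S+", sentence).group()

# Stricter: require the URL to end in a non-dot character.
stricter = re.search(r"https?://\S*[^\s.]", sentence).group()
```

Here `naive` is "http://facebook.com." and `stricter` is "http://facebook.com"; neither pattern can tell which reading the author intended, which is exactly the problem.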
Seems they have problems if the same URL is used more than once; the subsequent additions all break. You can get around it by adding a query string onto the domain you're shortening, like ?like=this.
Countries can, and frequently do, sell the rights to their ccTLDs to third parties. For example, Verisign leases the rights to the .tv domain off the Tuvalu government for $50m/year.
Getting access to sell all .tvs for $4MM/year is a steal, IMHO.
Not to mention the manner in which they sold them is unique. With most TLDs, all domains cost the same price, and it's first come, first served. With .tv, the domains were priced according to their value, with many costing $25,000 or more per year. So for instance I have no idea what mlb.tv cost Major League Baseball, but it was a lot more than $49/year.
Now that .tv seems to have hit its tipping point, $50 million over 12 years is a bargain.
Someday one of these guys is going to switch back to using ftp://. Sure, they'd have to proxy or cache everything (no redirect responses), but hey, it's shorter.
I want to know the real, complete domain name of http://to./ and http://tk./ -- I read the whole conversation but didn't get it. Any help will be greatly appreciated.