Tim Berners-Lee: Cool URIs don't change (1998) (w3.org)
115 points by vog on April 28, 2011 | 25 comments



Funny to think how 'document based' the web was back then. The concept of webapps must have been completely foreign. I'll bet he really hates URL shorteners.

Interestingly, the examples he gave of good URIs (e.g., http://www.nsf.gov/pubs/1998/nsf9814/nsf9814.htm) are still working 13 years later, while the bad ones are broken (e.g., http://www.nsf.gov/cgi-bin/pubsys/browser/odbrowse.pl).


I miss the document-based web. Actually what I miss is the "amateur web". I remember in 1994, when I first got the internet, typing in "Star Wars" and visiting the many, many fansites with random pictures, MIDI files, fan fiction, etc. Those sites don't exist any more. Sure, there are still some amateur sites out there, but they're always pretty professional, not the type done by complete amateurs who just want to share something they're passionate about.

Do a Google search for any hobby, fictional universe, or sports team and see how many pages it takes you to find a "Geocities" type of site.


Tools have been built to help amateurs better share what they want. I guarantee Tumblr + Posterous + Weebly + fanfiction.net + Wikia have far more 'amateur' sites and content on any topic you're interested in than the entire internet had in aggregate in 1994.


+1, very good point. Though discovering these sites is much harder than it was in 1994. Professional sites dominate search to such a degree that finding amateur sites requires much more work.


I think there is way more amateur content published today by passionate users, it just doesn't look amateurish. Wikipedia is the canonical example.

There are tons of users on Tumblr and Flickr who are passionate about photos. Such photo sites were very rare on the old web.

Blogger, WordPress, and YouTube have a lot of passionate amateurs posting high-quality content. asymco.com is run by a passionate amateur who provides better content than the professional media.

The old amateur web required users to do everything, including design. Most people can't design well (e.g. choose colors that fit well together), so most people ended up creating square wheels or "see my 'leet skillz" designs.

The old web was a small village: a very small selection of shops, but you knew all of them. The new web is a huge city: there are shops for everything under the sun; the problem is finding the shop that sells the stuff you want to buy.


I think a lot of those sites got replaced by blogs or content-specific services - no need to build an image gallery when you can use Flickr, etc.


Wikia hosts many fan sites/wikis that would probably have been Geocities pages back in the day. The Wikia sites just don't look like Geocities. :)


There is still a big 'market' in amateur fan sites - thefanlistings.org is a good start if you're into that sort of thing.


3rd party URL shorteners are a horrible idea. Why would we add an extra point of failure to links, the very fabric of the WWW?

Sites that maintain their own shorteners for their own content are really the only acceptable exception, and even then it's still an extra thing that can break.


URL shorteners are already starting to be seen by the anti-spam community as being a little bit like open SMTP relays.

Several of my customers have been listed on the SBL because they ran unauthenticated URL shorteners, and of course, those shorteners were used to get around anti-spam URL blacklists.


Do you mean that their server IPs got blacklisted? Or did the URL shortener domain just get added to a URL blacklist?


Both, if I remember correctly.


Though interestingly, he doesn't really mention 301 redirects. Is the point that URLs shouldn't change, or that changing URLs shouldn't break the web (or a little of both)?

Considering that no URL shortener I know of will actually let you change the underlying URL it points to (unless the shortening service itself decides to repoint it, tsk tsk), the short URL is pretty much a canonical pointer to that data, even though the hash rarely tells you anything about what it points to.
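
For what it's worth, the "changing URLs shouldn't break the web" reading is easy to sketch. Here's a minimal, hypothetical Python example (the URLs and mapping are invented, not from the article): when you control the server, you own the old-to-new mapping and old links keep working, which is exactly what you give up with a third-party shortener.

    # Minimal sketch: a server that 301-redirects retired URLs to their new homes.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical mapping of old URLs to where the content lives now.
    MOVED = {"/blog/2008/05/cool-uris": "/blog/cool-uris"}

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path in MOVED:
                # Permanent redirect: browsers, crawlers and caches learn the
                # new location, and every old link out there keeps working.
                self.send_response(301)
                self.send_header("Location", MOVED[self.path])
                self.end_headers()
            else:
                self.send_response(200)
                self.send_header("Content-Type", "text/plain; charset=utf-8")
                self.end_headers()
                self.wfile.write(b"Content lives here now.\n")

    if __name__ == "__main__":
        HTTPServer(("", 8000), RedirectHandler).serve_forever()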

Also: see http://www.301works.org


If you actually visit 301works.org you will notice that there are a whopping _seven_ URL shorteners that have uploaded any data at all in the past 6 months:

- rn.tl / lensrentals.com

- nbl.gs

- qr.cx

- tiny.cc

- urlcut.com

- url.ie

- va.mu

And to be honest, those 7 shorteners aren't exactly big fish in the URL shortener business.


I did, and it seems that at least bit.ly data is uploaded daily, but your point still stands - 301works does not have anything close to 100% shortener coverage.

http://www.archive.org/search.php?query=collection%3A301work...


Unfortunately that is not bit.ly uploading any data. It is the effort of a few people (me included) who scrape bit.ly and upload the data to archive.org.

Despite claiming otherwise, bit.ly hasn't uploaded a single short URL to 301Works yet.


The document-based web ended by 1995, and by 1998 most major sites were webapps. The majority of document-based sites were holdovers from the early 90s. There was no Ruby, but there were PHP, ColdFusion, and Java/JSP, and ASP had just been released.


I was referring to the parent's claim that "[in 1998] the concept of webapps must have been completely foreign".


I really like the historical note: "Historical note: At the end of the 20th century when this was written, "cool" was an epithet of approval particularly among young, indicating trendiness, quality, or appropriateness. In the rush to stake our DNS territory involved the choice of domain name and URI path were sometimes directed more toward apparent "coolness" than toward usefulness or longevity. This note is an attempt to redirect the energy behind the quest for coolness."


The best snippet for me: "Think of the URI space as an abstract space, perfectly organized. Then, make a mapping onto whatever reality you actually use to implement it."


This is a must read!

Every company I have worked at with legacy sites to port or maintain has yet to allocate any resources to this problem, no matter how often I suggest it. Now we have a great reference for convincing our bosses and clients of its importance.


Also see http://daringfireball.net/2007/03/blank_slate and scroll down to "Yes, even URLs are designed"


One mitigating tactic for websites linking to other people's content is to create a local copy and display that copy if the original content has been removed or de-linked, though this may not be 100% kosher copyright-wise.
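
Concretely, that tactic could look something like this rough Python sketch (the cache layout and helper names are invented): snapshot a page when you first link to it, and fall back to the snapshot if the live page later disappears.

    import hashlib
    import pathlib
    import urllib.error
    import urllib.request

    CACHE_DIR = pathlib.Path("link-cache")
    CACHE_DIR.mkdir(exist_ok=True)

    def cache_path(url: str) -> pathlib.Path:
        # One file per URL, keyed by a hash of the URL itself.
        return CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()

    def archive(url: str) -> None:
        # Take the snapshot at the moment you link to the page.
        with urllib.request.urlopen(url, timeout=10) as resp:
            cache_path(url).write_bytes(resp.read())

    def fetch_or_fallback(url: str) -> bytes:
        # Prefer the live page; fall back to the snapshot if it is gone.
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (urllib.error.HTTPError, urllib.error.URLError):
            return cache_path(url).read_bytes()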

But speaking of preserving old link structures, would you suggest a strategy for doing so?


On my own blog, I have gone through several migrations, either from one blogging platform to another, or more recently, deciding that I liked /blog/title/ better than /blog/year/month/title/. In each case, I have been very careful to ensure that I have a sufficient set of mod_rewrite rules configured in Apache so that any of my old URLs will redirect the user to the new URL.

In the case of changing structure wholesale, it was as simple as setting up `RewriteRule ^/blog/\d+/\d+/(.+)$ /blog/$1 [L,R=301]`

In previous cases, I had to get a lot more specific; e.g., articles with the same title would have different generated URLs depending on the blog software, so I would need to handle each of those as a redirect as well.
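
For those one-off cases, the same idea as the RewriteRule above can be sketched in Python (slugs here are invented): an explicit table of renamed articles checked before the generic date-stripping rule.

    import re

    # Articles whose generated slug changed between blog platforms (made-up examples).
    RENAMED = {
        "/blog/cool_uris_dont_change": "/blog/cool-uris-dont-change",
    }

    # The wholesale /blog/year/month/title -> /blog/title change.
    DATED = re.compile(r"^/blog/\d+/\d+/(.+)$")

    def redirect_target(path: str):
        """Return the 301 target for a legacy path, or None if the path is current."""
        if path in RENAMED:
            return RENAMED[path]
        m = DATED.match(path)
        if m:
            return "/blog/" + m.group(1)
        return None

    assert redirect_target("/blog/2009/04/hello-world") == "/blog/hello-world"
    assert redirect_target("/blog/hello-world") is None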


A classic. This keeps getting more credible as it ages. Now it's been at the same URI for 13 years.



