
It's funny that people say not relying on URLs would be anti-web. IMHO that conclusion rests on a selective set of empirical observations; you could just as well draw other conclusions from the same evidence.

Let's look at REST. Everybody is using it for HTTP APIs, or to be precise: everybody pretends to use it. Because, as many know, a REST API is only a true REST API if it follows the HATEOAS paradigm. A paradigm which is in fact really cool. But why do we think it's cool? Because Roy Fielding found in his thesis that the (human) web is basically HATEOAS, and he says the web is so successful because of that.
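A minimal sketch of the idea (in Python, with invented field names and URLs, not any real API): in a HATEOAS response, the client discovers the next possible actions from links embedded in the payload, instead of hard-coding URL patterns.

  # Illustrative HATEOAS-style resource; all names and URLs are made up.
  order = {
      "id": "order-123",
      "status": "pending",
      "_links": {
          "self":    {"href": "https://api.example.com/orders/order-123"},
          "cancel":  {"href": "https://api.example.com/orders/order-123/cancel"},
          "payment": {"href": "https://api.example.com/orders/order-123/payment"},
      },
  }

  # A generic client follows link relations by name, so the server can
  # restructure its URL space without breaking clients.
  def follow(resource, rel):
      return resource["_links"][rel]["href"]

  print(follow(order, "cancel"))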

But in reality... hardly any HTTP API uses HATEOAS. In fact, many popular APIs hide the HTTP details from their consumers completely. (If everybody were using HATEOAS, we would never have to update the client libs, right?) Something similar goes for normal websites. Most URLs are not human readable; even HN is an example, where the URLs are just numbers (there was a post discussing that recently). Most news websites have even more complicated URLs; they are not made for humans, and to me they are something like memory addresses. The Gmail URLs (the web's most successful mail client) also look very funny. I wonder if there are users who manipulate them by hand, or who bookmark their outbox.

I somehow like looking at URLs but am I supposed to edit them as an end user or draw conclusions from their look? (And is Google? ;))

BTW: URLs were interesting for identification in the '90s, because only GeoCities and friends had domains. Now everybody owns a domain.




> but am I supposed to edit them as an end user or draw conclusions from their look?

Here is a data point that may be of interest: On YouTube, since links are filtered from comments, many users link to other videos by posting the "tails" of URLs - some with "watch?v=xxxxxxxx", some "?v=xxxxxxxx", and some just post the random-looking video ID part with nothing other than "see video xxxxxxxx". In other words, there's evidence to suggest that a reasonably large portion of the otherwise "computer-illiterate" have at least a basic understanding of how URLs work and will edit them manually to get what they want.

Edit: or to put it another way, there are people who, having made extensive use of YouTube (or possibly other sites), have noticed the patterns in all of its URLs and use that knowledge to succinctly name a video without explicitly giving the entire link. They are also implicitly teaching others this knowledge in the process. This is a perfect example of the kind of learning experience users would be deprived of if their browser hid the path in URLs.
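To illustrate the pattern they've internalized, here's a rough Python sketch that turns those posted "tails" back into full watch URLs. (The 11-character ID length is my assumption about current YouTube IDs, not something stated above; the IDs here are placeholders.)

  import re

  # Normalize the URL "tails" people post in comments back into watch URLs.
  # Covers the three forms mentioned above.
  def to_watch_url(fragment):
      m = re.search(r"(?:watch\?v=|\?v=|^)([\w-]{11})$", fragment.strip())
      return "https://www.youtube.com/watch?v=" + m.group(1) if m else None

  for tail in ["watch?v=xxxxxxxxxxx", "?v=xxxxxxxxxxx", "xxxxxxxxxxx"]:
      print(to_watch_url(tail))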


Yeah, and the "otherwise computer-illiterate" may also think: oh wow, this is a cool feature I should rely on. Even though you can just paste the YouTube URL into the comment field and get a nicely formatted link.

For me this is further evidence that the main argument is broken. YouTube is super successful, yet it is really restrictive when it comes to hyperlinking and mashing things together.

Update: just for clarification, because of the downvotes: YouTube does not filter YouTube URLs.


They might've changed it with the new comments system, but I know that links of any sort were completely disallowed not long ago. Nevertheless, I still see the posting of bare video IDs in many recent comments.

(I've basically never participated in YT comment discussions. There's definitely a lot of idiocy, but it's also interesting to just observe and see the sometimes surprising positive things like this that can occur.)


I do think there should be more of a focus on URNs rather than URLs, and this could help.

URLs are pretty rigid. If tomorrow HTTP were swapped out for a different protocol, what would happen?
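To spell out the contrast, a toy Python example (the urn:isbn namespace is a real registered one; the URLs are placeholders): a URL binds the name to a protocol and host, while a URN only names the thing and leaves resolution to some service.

  # A URL couples identity to protocol + location; a URN is just a name.
  url = "http://books.example.com/catalog/0451450523"  # dies with the protocol/host
  urn = "urn:isbn:0451450523"                          # survives; needs a resolver

  # A resolver (a dict standing in for a real resolution service) maps the
  # stable name to whatever the current location happens to be.
  resolver = {"urn:isbn:0451450523": "https://books.example.com/catalog/0451450523"}
  print(resolver.get(urn))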

You'd be better off referring to the article posted as Allen Pike's article on removing URLs (or some such).

Fuzziness feels more natural. I can bookmark a rigid URL, but what if it later moves? I might be better off bookmarking a signature of the article (a very basic form might be author and title).

Search engines have a signature of articles, and if you are lucky that signature will somehow be matched against your loose search query. The success of search engines depends on how well they order and match results against your input.
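A toy version of that "signature bookmark" idea in Python, with difflib as a crude stand-in for a search engine's ranking (the titles and URLs below are placeholders, not real references):

  import difflib

  # Store (author, title) instead of a location, and resolve it later by
  # fuzzy-matching against whatever index is available.
  index = [
      ("Allen Pike", "On removing URLs", "https://example.com/removing-urls"),
      ("Roy Fielding", "Architectural Styles and the Design of Network-based Software Architectures",
       "https://example.com/fielding-thesis"),
  ]

  def resolve(author, title):
      query = (author + " " + title).lower()
      def score(entry):
          return difflib.SequenceMatcher(None, query, (entry[0] + " " + entry[1]).lower()).ratio()
      return max(index, key=score)

  print(resolve("allen pike", "removing urls")[2])  # https://example.com/removing-urls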


To me, the question wasn't so much "does it make YouTube more successful" but rather "does it allow people to notice patterns in the URL and deduce things useful to them from that".


>Most URLs are not human readable, even HN is an example

Personally I find the HN way fairly readable. I mean, item no. 7677898 is something I can read and understand, and similar systems are used for quite a few things in the real world, like phone numbers, zip codes and passport numbers.

It's stuff like "https://www.google.co.uk/search?q=address+white+house&oq=add..." that I find borderline unreadable.


A confession: I sometimes edit Gmail URLs by hand. In particular, I'm often using multiple Google accounts, and find that after restarting the browser (or something) a tab's account changes to something other than what I wanted. So I like having the option to just change the user ID, i.e. the zero-based integer in Gmail URLs ("https://mail.google.com/mail/u/0/#inbox"). In other contexts, such as Google sign-in, this ID appears as the authuser GET parameter.
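The same hand-edit as a tiny Python helper, assuming the /mail/u/<n>/ segment stays where it is today (that's an observation about current Gmail URLs, not a documented contract):

  import re

  # Swap the zero-based account index in a Gmail URL of the form shown above.
  def switch_account(url, account_index):
      return re.sub(r"/mail/u/\d+/", "/mail/u/%d/" % account_index, url)

  print(switch_account("https://mail.google.com/mail/u/0/#inbox", 2))
  # https://mail.google.com/mail/u/2/#inbox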


> I somehow like looking at URLs but am I supposed to edit them as an end user or draw conclusions from their look?

People shouldn't be drawing conclusions from the URL (apart from the query string and the domain part, though the latter is a whole other story). URLs are supposed to be unchanging and unbreaking, and that is strongly incompatible with having them carry human-meaningful information, in particular (though not exclusively) with using path segments to communicate a tree structure for your website. Such tree structures are inevitably torn down and replaced over time on most long-active websites (especially those which are the public-facing homepage of a long-lived organisation), and the result of placing them in the URL is inevitably link breakage. Hiding the URL by default is therefore good: it should help keep the user from seeking meaning in the URL, and the site owner from placing it there.
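A sketch of what that looks like in practice (Python; all names invented): publish opaque permalinks and keep the mutable tree structure as an internal mapping, so reorganizing the site means updating the map rather than breaking every published link.

  # Published URLs carry only an opaque ID, e.g. /p/a1b2c3; the tree location
  # lives in an internal map and can change freely.
  permalinks = {
      "a1b2c3": "/about/team/history.html",  # today's spot in the site tree
  }

  def resolve_permalink(opaque_id):
      return permalinks.get(opaque_id)

  print(resolve_permalink("a1b2c3"))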

And links (with the exceptions noted above) shouldn't contain meaningful information for automated use either: it's Hypertext As The Engine Of Application State, not Link Structure As The Engine...

(Tree-structured site guides are fine and useful means of navigating, by the way; they just don't belong in the URL.)


Even if the URLs on HN or various newspapers are not very human friendly, at least one can copy and save them. How do you propose doing that without URLs?


I'm not saying that URLs are superfluous. But I think their function is exaggerated, especially when it comes to web apps, and even in the case of articles, where hyperlinks are really useful and used a lot. They often seem like C pointers on which you'd better not do any arithmetic. But unlike C pointers, I often cannot dereference them. ;)

Around '99 I did not rely on bookmarks; instead I saved interesting articles to my hard drive, because links would break so often. Even today the problem remains, and I wouldn't dare to say whether it has gotten better or worse.

Maybe there are smarter concepts than HTTP-style URLs that we are not yet aware of. That might also be interesting for privacy, because many people actually do not want static hyperlinks to their personal information that last a million years.



