Why does HN generate unique URLs for the 'more' pages?
82 points by andrewfelix on Feb 23, 2012 | 42 comments
I'm not sure if this question has been asked but it doesn't make a whole lot of sense to me. At least a dozen times I've hit the 'more' button to find the URL expired. Wouldn't it make more sense to have a page 2, page 3 etc?



It's called a continuation. Continuations are essentially closures passed around to represent state (hence the "fnid" parameter); check the Wikipedia page on them for a proper computer-science definition. The nature of Lisp lends itself well to continuation-based programming, and sometimes (as on HN) it's used in web applications. In that context it's a different approach from, for example, REST/MVC. It has a few nice properties, such as easy state management. This can be very useful when a web application is an interface to a process (e.g. checkout in a web store) as opposed to CRUD. On HN, it allows paging that "remembers" the order of things at the time you visited the first page. Reddit uses a similar mechanism. The obvious downside is that you have to keep track of all those states, and if you lose them (for example because they expire), the user's workflow is interrupted (hence the error message).

It's not the trendiest programming technique around these days but it's a useful tool in the toolkit of a good developer.
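A minimal sketch of this style of pagination, in Python rather than Arc — the in-memory table, the "fnid" naming, and the expiry behaviour here are all assumptions for illustration, not how news.arc actually works:

```python
import secrets

# Hypothetical in-memory continuation table, keyed by an "fnid"-style token.
continuations = {}

def register(closure):
    """Store a closure and return the token to embed in the 'more' link."""
    fnid = secrets.token_hex(8)
    continuations[fnid] = closure
    return fnid

def page_html(stories, start, per_page=3):
    """Render one page and register a continuation for the next one."""
    html = "\n".join(stories[start:start + per_page])
    if start + per_page < len(stories):
        # The closure captures `stories` and `start`, freezing the ordering
        # the reader saw on page one.
        fnid = register(lambda: page_html(stories, start + per_page, per_page))
        html += f"\n<a href='/x?fnid={fnid}'>More</a>"
    return html

def follow_more(fnid):
    """Handle a click on a 'more' link."""
    closure = continuations.get(fnid)
    if closure is None:
        # A real server expires old entries by age or table size,
        # which is exactly what produces this error on HN.
        return "Unknown or expired link."
    return closure()
```

The state lives entirely on the server; the URL is only a key into the table, which is why a lost entry interrupts the workflow.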


"The nature of Lisp really lends itself to continuation-based programming."

This comes up occasionally on HN. I've never seen anyone say anything positive about it.

It seems like a software detail that's unnecessarily exposed to users. It doesn't make the site better; it makes it worse.


I disagree. As I look at HN several times a day, I like how pages expire: it explicitly tells me that the order has changed. People are WAY too fussy about messages in their UX because we have abstracted UX to such an extent that we expect the same smooth experience independently of function. The internet's expectations (and sheepishness) are making all software like that crap you get free with your camera.


I completely agree, but I wouldn't mind it nearly so much if the "expired link" page at the very least had a link to the home page.


I added a suggestion to that effect on the Feature Request thread (http://news.ycombinator.com/item?id=363) a while ago, but it slipped off into the long tail of comments pretty quickly:

http://news.ycombinator.com/item?id=1480352


Or to the next page.


Reddit does not use a similar mechanism. Here's an example of one of their next page links: http://www.reddit.com/?count=100&after=t3_q1sxk

The count is the number of stories to skip, the after is the id of the last story on the previous page.
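In code, that anchor-based scheme looks roughly like this (a toy sketch with made-up ids, not Reddit's implementation):

```python
# Keyset-style pagination in the spirit of ?count=100&after=t3_q1sxk:
# the `after` parameter names the last item seen, not a numeric offset.

def next_page(listing, after=None, limit=3):
    """Return the items that follow `after` in the current listing."""
    ids = [item_id for item_id, _ in listing]
    # Start from the top when there is no anchor, or when the anchor
    # has dropped off the listing entirely.
    start = ids.index(after) + 1 if after in ids else 0
    return listing[start:start + limit]

listing = [("t3_a", "one"), ("t3_b", "two"), ("t3_c", "three"),
           ("t3_d", "four"), ("t3_e", "five")]
```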


I don't see how that improves anything. If you click that link, you still see "nothing here".


The point isn't that it's an improvement, the point is that it's completely different. It's the "standard" way of doing pagination (limit + offset), not a continuation.


That's not offset + limit; it's start ID + count, which is functionally very different from offset + limit.

If Reddit were to use offset + limit, you would frequently visit page 2 only to see the same articles you were reading on page 1 five minutes before. A good example of this issue with offset + limit based pagination is 4chan, which is basically impossible to read in a linear or sane manner.

Reddit somewhat alleviates the problem by ensuring that posts after the last one you read are displayed - you'll occasionally see some of the same posts you were reading on page one, and occasionally miss posts that "jump" to page one while you're reading it, but it'll at least be usable, unlike 4chan.
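A toy demonstration of the difference, with made-up post ids — a new post arriving at the top shifts every offset, but leaves the anchor alone:

```python
def by_offset(listing, offset, limit):
    return listing[offset:offset + limit]

def by_anchor(listing, after, limit):
    ids = [post_id for post_id, _ in listing]
    start = ids.index(after) + 1 if after in ids else 0
    return listing[start:start + limit]

posts = [("p5", "e"), ("p4", "d"), ("p3", "c"), ("p2", "b"), ("p1", "a")]
page1 = by_offset(posts, 0, 2)            # reader sees p5 and p4

# A new post lands on top before the reader clicks "next".
posts = [("p6", "f")] + posts

offset_page2 = by_offset(posts, 2, 2)     # p4 again: everything shifted down
anchor_page2 = by_anchor(posts, "p4", 2)  # p3, p2: continues where we left off
```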


Yes, you are correct, I was merely being lazy in my description.


I'm not claiming it's an improvement.


Strange... I clicked it and it works (and I do consider that an improvement).


You're right, Reddit does not use a similar mechanism. My bad.


This reminds me waaaay too much of ASP.NET's oh-so-wonderful __VIEWSTATE.


dchest answered more of the 'what' than the 'why'.

The answer to the 'why' is that news.arc (what this site runs on) is the pet example of Arc, pg's Lisp dialect, which is focused on building Lisp web apps in continuation-passing style.

As such, the technical purity of the idea matters more than the UX, as anything else would mean admitting that trying to implement a pretty simple web app by passing around continuations like this is a fool's errand.

(Actually, Arc and these ideas work great for rapid prototyping; they just don't hold up beyond the smallest of scales.)


Aigh, you beat me to it.

pg, please fix this. It drives me batty that after a page is open for a little while the "more" link breaks. That should never, ever, ever happen.


You can kill me, but I find it somewhat ironic that the functionality championed in the Arc Challenge is directly responsible for the most annoying bug on Hacker News. :)


This bugs me endlessly when browsing HN.

I like to open and read interesting links in new tabs, and invariably the continuations have expired when I return to HN after reading another tab. I see the expiry error many times every day. It is a terrible user experience, even if it's an interesting technical 'solution' to paging.



Insane. How can that be called an invention? That's like patenting 'Stepping over obstacles in order to avoid tripping'.


or, for a slightly less advertising-ridden interface, http://www.google.com/patents/US6205469


irony defined


I'm not sure how the patent relates to the question. The UI issue is that a user goes to the 2nd or 3rd page, leaves to read some articles or whatnot, and then comes back to find that he or she cannot move on to the 4th page.

It's frustrating.


The Arc server stores closures in memory (see the patent for a similar technique). Links include the unique ids of these closures, and the closures expire. This has been explained numerous times here on HN; see the search link in my other comment.


The part that confuses me is that 'this has been discussed before' is considered a valid answer, as opposed to, say, 'yes, it's horribly broken and no one is willing to fix it'. The site routinely fails to return results for links it generated just minutes before. How that is not considered a bug, on a site that represents forward-thinking web development, mystifies me.


Here's pg's explanation (from http://news.ycombinator.com/item?id=3098756):

"It's not so much that it's ahead of its time relative to hardware as it is something you do in the early versions of a program. Using closures to store state on the server is a rapid prototyping technique, like using lists as data structures. It's elegant but inefficient. In the initial version of HN I used closures for practically all links. As traffic has increased over the years, I've gradually replaced them with hard-coded urls. Lately traffic has grown rapidly (it usually does in the fall) and I've been working on other things (mostly banning crawlers that don't respect robots.txt), so the rate of expired links has become more conspicuous. I'll add a few more hard-coded urls and that will get it down again."


I think storing state on servers is always a bad idea. State should be stored in a cookie in the user's browser, letting him decide when it expires.


a) This isn't really a site representing forward-thinking web dev. If anything, some of the more valuable bits I've picked up here suggest that your web-dev stack is less important to the success of your company than many other factors.

b) Some would argue that using closures to represent page links is clever and forward-thinking, even if the current implementation is broken.

But, yes, it is annoying, and I wish it could be fixed.


I gave a valid answer to the question in the title: "Why does HN generate unique URLs for the 'more' pages?" I agree that it's broken.


User experience > cute technical solution


Continuations were all the rage a few years ago, during yet another doomed resurgence of Lisp on the web. I agree, it is completely awful, and I'd rather have a semi-disorganized link order.


I've read the bits about implementation based on continuations in various other comments.

Given that:

- the ranking of HN stories likely is in constant flux ...

- and that some users might want a consistent view of stories across multiple pages ...

- and that having .../page1-, .../page2-, .../page3-style URLs might result in some user seeing the same story presented twice in their browsing, once on page1 and then again on page2 as its rank changes ... or worse, missing a story that got promoted to page1 from page2 during a session

... do the continuations as implemented on HN present a static view of the ranked stories with an unchanging ordering (as long as links don't expire)? To do this with straight URLs, one would need to present different links at different times to different users, so that page 2 reflects what was on page 2 at the time they started their HN visit. If so, that would be a feature.
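One way to get that static view with plain URLs is to snapshot the ranking once per visit and address pages within the snapshot — a hypothetical sketch, not what HN actually does:

```python
import secrets

snapshots = {}   # in-memory; real code would need expiry here too

def start_visit(ranked_stories):
    """Freeze the current ordering and return a token for this visit."""
    token = secrets.token_hex(8)
    snapshots[token] = list(ranked_stories)
    return token

def page(token, number, per_page=3):
    """Serve e.g. /news?snap=<token>&p=<number> from the frozen snapshot."""
    snap = snapshots.get(token)
    if snap is None:
        return None   # snapshot expired: same failure mode as fnids
    start = (number - 1) * per_page
    return snap[start:start + per_page]
```

This trades the continuation table for a snapshot table, so it shares the expiry problem; it just makes the page number explicit in the URL.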


I don't mind seeing the same stories pop up in the feed on the next page as long as the link works. Let's be real here, these are links to blog posts, and new projects. It's not like someone's going to get killed if we see a repeated headline.


Btw, for all those who are talking about continuations:

http://www.hnsearch.com/search#request/all&q=pg+continua...


I didn't even know there was an hnsearch. Thank you!


Sounds to me like this could be fixed quite easily. Why not pass both a continuation ID and a page number in the pagination links? If the continuation has expired, simply use the page number to generate a new page.

I agree with other commenters that the current way feels very broken. I get to see the "expired page" dozens of times a day because I always leave my HN tab open and follow stories in a new tab.
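The fallback suggested above could be as simple as this sketch (all names hypothetical):

```python
continuations = {}   # fnid -> closure, as today

def render_page(stories, number, per_page=3):
    """Regenerate page `number` from the current ranking."""
    start = (number - 1) * per_page
    return stories[start:start + per_page]

def more(stories, fnid, page_number):
    """Handle a link carrying both an fnid and a page number."""
    closure = continuations.get(fnid)
    if closure is not None:
        return closure()   # fast path: the frozen ordering still exists
    # Continuation expired: fall back to the page number instead of
    # showing "Unknown or expired link."
    return render_page(stories, page_number)
```

The fallback page may show a slightly different ordering than the frozen one, but the 'more' link never dead-ends.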


I have a suggestion for a hack that would make life a little more pleasant for those who don't mind running JavaScript, while not affecting those who want to use the site as is. The script could be implemented by the community and hosted in a Git repo; pg could include it in the Arc code with just a one-line change.

The JS would just fetch the first n pages and allow easy navigation using the 'more' button. It would be visually very similar to what we have now, and if the community cares it could become much richer in functionality. If pg refuses, at least there are enough HN users annoyed by this bug that we could create a Chrome plugin for HN that fixes it. I could probably write something like this myself, but the community should probably first formulate exactly what the plugin should and should not do. Does such a plugin already exist? Is it a bad idea to solve it this way? Would the additional burden on the servers be too high to be worth it?


As I understand it, pagination is there for performance reasons: the server does not compute the rest of the page until needed. Adding a script that pre-fetches all pages would defeat this.


Patent, shmatent. What currently exists SUCKS. How many users have to complain before someone gets a glimmer of a clue that MAYBE they should try ANYTHING ELSE.


Not only that, but it seems to paginate comments at an absurdly low count. I went back to reference an old thread and got five pages in before I got tired of looking for the right comment. And I've been bitten by "expired" links three times today alone, once while trying to submit a comment.


Off-topic: has HN been under heavy load recently? Sometimes I see recent pages cut off at around 50 comments, with 'more' links. Normally this would only happen on older pages where you can no longer comment.




