How about this as a fix that retains the current design and functionality?
As well as including the fnid in the pagination links, also include a pg=1..n parameter that is used only as a fallback if the continuation has been garbage-collected. That way you keep the continuation design, which lets the user see the list continued in the order it had on the previous page, but if the continuation has been collected the site takes the current ordering and returns page n.
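A rough sketch of the idea, in Python rather than Arc, with invented names (render_more_link, handle_more and the module-level table are not from the HN source):

  # Hypothetical sketch of the fnid + pg fallback; none of these names come
  # from the actual Arc source.
  from typing import Callable

  PAGE_SIZE = 30
  continuations: dict[str, Callable[[], list[str]]] = {}  # fnid -> stored closure

  def render_more_link(fnid: str, page: int) -> str:
      # Emit both parameters; pg is only consulted if the fnid lookup misses.
      return '<a href="/news?fnid={0}&pg={1}">More</a>'.format(fnid, page)

  def handle_more(fnid, page, current_ranking):
      cont = continuations.get(fnid)
      if cont is not None:
          # Normal path: the continuation reproduces the ordering the reader
          # saw on the previous page.
          return cont()
      # Fallback path: the continuation was garbage-collected, so serve page
      # n of whatever the current ordering is instead of an error page.
      start = (page - 1) * PAGE_SIZE
      return current_ranking[start:start + PAGE_SIZE]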
If I had the time this afternoon I would have a look through the code to see if this would work, but unfortunately I don't. Is there anyone here who is familiar with the code base and could assess whether this is a simple change?
It doesn't reflect well on YC that the site is so embarrassing technically. I will now force a speedy fix by suggesting that the problem lies with Lisp.
I know you're kidding, I'm just not sure if you're half-kidding or whole-kidding.
Would YC be more successful if HN was technically improved? Certainly not. Would HN be more successful? Hard to say, but probably irrelevant -- HN is the social focus of the startup community. This has rewards for YC, but comes with taxes as well.
Regardless, there is no trend away from HN visible today, and the cobbler has other work.
:-) I don't think pg needs to prove that it's possible to explicitly carry state from request to request using Arc. You can do that in any language. Continuation passing is a cool hack, though I wouldn't use it myself on a production web site that needed to scale. And I do use Scheme.
This bug doesn't only happen on the homepage, but also in old threads: when you click "More" at the bottom of the page to read the next page of comments, you get the same error message instead. If you wait a bit, go back to the previous page, refresh and click "More" again, it goes away, but it is certainly annoying.
There's no need to "wait a bit". Just go back and reload the page so that you have fresh fnids that stand a chance of still being in the cache when you click the link.
Problem? I thought this was a feature. It happens when the user does not interact with a page for some time.
I thought HN did it so that users are forced to refresh the screen and get the latest news.
Am I wrong here?
Ha. Never thought of that. But no, it's not a feature in the sense of something that was designed with this goal in mind. It's a way of not having to design and implement a whole URL scheme, and instead being able to easily run arbitrary code in response to a link being clicked. This architecture, however, has the undesirable (except apparently to you) side effect of occasional "unknown or expired link"s.
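A toy version of that dispatch pattern, in Python rather than Arc and with invented names, for anyone trying to picture the trade-off:

  # Toy illustration of continuation-style dispatch; not the real HN code.
  import secrets
  from typing import Callable

  handlers: dict[str, Callable[[], str]] = {}

  def link_to(action: Callable[[], str]) -> str:
      # Register arbitrary code to run when the generated link is clicked;
      # the URL only needs to carry the id, not any of the state.
      fnid = secrets.token_hex(8)
      handlers[fnid] = action
      return "/x?fnid=" + fnid

  def dispatch(fnid: str) -> str:
      action = handlers.get(fnid)
      if action is None:
          # Once the stored closure has been dropped there is nothing left
          # to run, hence the error page.
          return "Unknown or expired link."
      return action()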
Couldn't pg just toss an additional GB of RAM into the server and dedicate it to the continuation cache? Just extending the lifetime from a few minutes to a few hours or days should be enough.
It seems to stop the second page from becoming stale: it forces you to refresh and get a new copy of the front page first, preventing you from viewing outdated content.
While I agree it's a feature, I'm pretty sure it works the opposite way. The idea is that you want the second page to be "stale": if you've just spent time reading the front page, when you click to the second page you should be getting all new links.
This comes at a cost though, because the server must cache all the possible second pages that might be requested. So HN makes a compromise and caches them for an hour or so. If you wait an hour, it will have been evicted from the cache and the error will appear.
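Roughly, under that model (a sketch only; the one-hour TTL and the names are guesses, and the real code is Arc):

  # Sketch of "cache the reader's continued view for a while": keep a snapshot
  # of the ranking taken when page 1 was rendered and discard it after a TTL.
  import secrets
  import time

  TTL_SECONDS = 3600  # guessed figure
  snapshots: dict[str, tuple[float, list[str]]] = {}  # fnid -> (created, ranking)

  def remember_ranking(ranking: list[str]) -> str:
      fnid = secrets.token_hex(8)
      snapshots[fnid] = (time.time(), ranking)
      return fnid

  def next_page(fnid: str, page_size: int = 30):
      entry = snapshots.get(fnid)
      if entry is None or time.time() - entry[0] > TTL_SECONDS:
          snapshots.pop(fnid, None)
          return None  # caller shows "Unknown or expired link."
      return entry[1][page_size:2 * page_size]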
I like your explanation of this feature. I guess it rarely works for me because I open dozens of tabs when I start and some of them inevitably become stale if I do something else in between and then go back to read some hacker news discussion.
I often end up just closing discussions that have a "more"-link, since it tends to frustrate me one way or another. I wish there was an option to disable "more"-links entirely (or is there?). I can see how it's useful for small screens, but full-size on a 27"+ screen, or in portrait mode (my preference for browsing the web), I think seeing 500+ comments on one page would be totally fine :-)
(Ideally, after reloading it, new submissions would be highlighted, one can only dream)
Yes, it is a feature, but it's a feature that leads to a terrible user experience. So while it may not be a bug, since it's by design, it's still a really big defect.
In my personal case: I would be much more of a contributor to HN if I didn't have this problem on a daily basis.
Usually it happens after I have tabs open, go get some food, and come back; I then need to start my browsing on HN all over again. Usually this only happens once, because I say fuck it and go elsewhere.
What are the pros that make this con worth it? I'm not seeing it; it's just a big massive pain in the ass.
Man, really, a feature? You think the designer of this forum decided to break all the links on purpose? That's some strange design decision.
Wouldn't a minimum fix be a simple htaccess redirect for failed links?
Since a page refresh usually works, a redirect to the referrer (i.e. sending you back to the page you're on) seems like it should work most of the time; of course it should preferably give some feedback to indicate what has happened.
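A rough sketch of the redirect-to-referrer behaviour (a Python stand-in with invented names; whether this could be done at the .htaccess level or would need to sit in the app is another question):

  # Sketch of "fall back to the referrer instead of an error page" when an
  # fnid lookup fails; invented names, and the real request handling is Arc.
  from typing import Callable, Optional

  continuations: dict[str, Callable[[], str]] = {}

  def handle_fnid(fnid: str, referer: Optional[str]):
      cont = continuations.get(fnid)
      if cont is not None:
          return 200, cont()
      if referer:
          # Send the user back where they came from, with a hint in the URL
          # so the page can explain that the link had expired.
          sep = "&" if "?" in referer else "?"
          return 302, referer + sep + "expired=1"
      return 200, "Unknown or expired link."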
I don't know the inner workings, but from what I understand, data gets invalidated by its "key" and new pages with new keys are generated so that the page content remains "fresh", so no htaccess rule is going to help you with a problem like that.
It sounds like serious changes would be required to make this behave like conventional pagination, which I'm guessing is why it hasn't been fixed.
No, it's a bug, a design flaw in using continuations. See my other comment. If it were by design, the links wouldn't keep working for longer at quiet times on the site; they do simply because old continuations haven't yet been flushed to make room for new ones.
What about the app automagically saving the linked content in a cache and, if the link ever goes dead or the content vanishes, presenting the cached version to users?
I'm unclear what you're suggesting but your up-votes suggest I'm alone. :-) The fnid (function ID) is the index into the cache storing the continuations. It's this cache that's dropping `old' entries as new ones are added. Depending on how busy the site is, that takes a varying amount of time.
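In other words, something shaped like this (a Python stand-in; the table size is made up and the real structure is an Arc table):

  # Stand-in for the continuation cache: a fixed-size store where adding a new
  # fnid pushes out the oldest one, so how long any given link survives
  # depends on how quickly new fnids are being minted.
  from collections import OrderedDict
  from typing import Callable

  MAX_FNIDS = 10_000  # made-up figure

  table: OrderedDict[str, Callable[[], str]] = OrderedDict()

  def store(fnid: str, cont: Callable[[], str]) -> None:
      table[fnid] = cont
      while len(table) > MAX_FNIDS:
          table.popitem(last=False)  # evict the oldest entry

  def lookup(fnid: str):
      return table.get(fnid)  # None means "Unknown or expired link."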
Ah, OK, it's not at the top for me, and there's no indication that stories are sorted by vote score / most upvotes / balance of upvotes, or whatever it was you thought put him at the top.
I always assumed that first-level comments were sorted by datetime, as there was no indication of any other sort, and IIRC my new first-level comments enter at the top.
No, have a skim of the top-level comments on this post or others. It's not simple time-based ordering; popular comments get pushed to the top. Freshness is a factor, else all new comments would be near the bottom with one point, but as all things age it becomes nearly all score. The source for the forum used to be available in the Arc language tar file, and perhaps it still is.
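If you want a feel for the blend without digging out the Arc tarball, it's something in this spirit (invented constants; not the actual formula):

  # Guess at the general shape of the ordering described above: a freshness
  # bonus that decays quickly, layered on top of the score.  The constants are
  # invented; the real formula lives in the Arc source, not here.
  from dataclasses import dataclass

  @dataclass
  class Comment:
      text: str
      points: int
      age_hours: float

  def rank(c: Comment) -> float:
      # New comments get a boost so they don't start at the bottom with one
      # point; as everything ages the bonus fades and ordering is nearly
      # all score.
      freshness_bonus = 8.0 / (c.age_hours + 1.0)
      return c.points + freshness_bonus

  comments = [
      Comment("old but popular", points=40, age_hours=12),
      Comment("brand new, one point", points=1, age_hours=0.1),
      Comment("middling", points=5, age_hours=3),
  ]
  for c in sorted(comments, key=rank, reverse=True):
      print(round(rank(c), 2), c.text)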
Keep in mind, as pg confirmed the other day, HN is still running on a single core.
pg/HN has always been good at prioritizing. (And at identifying his interests versus yours, which may often but not always overlap.) Also, there are ready workarounds, if you must, e.g. load the linked page of interest into a new tab before it expires.
This is something people liked about HN, including early on. That pg would make useful decisions and then not take / cave in to cr-p about them.
I'll live with the expiring links, if and as it makes other parts of managing HN easier.
P.S. As I reflect, is some of the increased discomfort and agitation from users on this point due to an increase in mobile browsing, where such user-initiated workarounds face a more cumbersome UI? (Not all, but some.)
It was a lot worse in the past, but pg cut back on the number of fnids being generated, switching to more conventional methods. There are still quite a few around, though, and obviously increased traffic, meaning more new fnids to store, puts pressure on the cache.
He means Hacker News' next-page link. The URL you get by clicking it is unique to you and preserves the order of stories when you navigate from one page to the next, but it has a short lifetime.
Yes, that's right, but if you look at the page source you'll see fnid used elsewhere too, e.g. "add comment" IIRC. It's annoying to have typed a comment, hit add, only to find the continuation has been flushed from the cache at the server.
(The poor formatting of sed is caused by the post being made dead; it triggered a re-submission of the content causing corruption.)