

How Does HackerNews Pagination Work? - wpeterson

I'm familiar with traditional SQL list pagination (limit/offset) using a page number query parameter.

Hackernews seems to generate a token for the next page that has a limited cache/lifetime.

Is this a common design pattern? Where can I find more information?
======
pg
The code that generates list-structured pages spits out a page of n items,
then saves a closure that will keep going if asked. Since closures can't
conveniently be written to disk, you have to gc them after a while or you'll
have a memory leak.

It may be overkill to use this approach if you just want to generate the
frontpage, but the advantage is that it's very general. You're not limited to
displaying a range of items stored in a list somewhere; you could be
displaying things you're computing on the fly.
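A minimal sketch of that scheme in Python, assuming an in-memory dict as the closure store (the names `paginate` and `more`, the token format, and the TTL are illustrative guesses, not HN's actual code):

```python
import itertools
import time
import uuid

# Hypothetical in-memory store of continuation closures, keyed by token.
# Entries older than TTL are treated as collected, mirroring HN's dead links.
_continuations = {}
TTL = 600  # seconds a "More" link stays valid (illustrative value)

def paginate(items, n):
    """Render the first n items from any iterable; return (page, token).

    `items` can be a list, a generator, or anything computed on the fly;
    the closure captures the iterator's position rather than an offset.
    """
    it = iter(items)

    def next_page():
        page = list(itertools.islice(it, n))
        token = None
        if page:
            token = uuid.uuid4().hex
            _continuations[token] = (time.time(), next_page)
        return page, token

    return next_page()

def more(token):
    """Follow a "More" link; returns None if the closure is gone."""
    entry = _continuations.pop(token, None)
    if entry is None:
        return None  # unknown or already-followed link
    created, cont = entry
    if time.time() - created > TTL:
        return None  # expired: the closure was "gc'd"
    return cont()
```

The generality pg mentions falls out of `paginate` taking any iterable: nothing requires the items to live in a list on disk.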

~~~
wpeterson
That's fascinating.

Is the stored closure working on a snapshot of potential articles, or does it
have a stateless algorithm that can generate the next n articles?

People often complain about traditional offset/limit pagination, but this is
the most interesting alternative I've heard.

~~~
bmm6o
> People complain often about traditional offset/limit pagination

I haven't heard any. What kinds of complaints have you heard?

I appreciate the generality and elegance of the closure approach, but the
links expire far too quickly for my taste. About once a week I read a page
slowly enough that the "next" link is dead by the time I click it.

~~~
wpeterson
I was working on a caching problem and it reminded me that caching requests
for a resource with standard pagination sucks.

You cache Page 1 with some set of objects; by the time you cache Page 2, the
underlying set has changed, so it's Page 2 of a different overall set. It's
inconsistent and wonky.

I was hoping there was a more clever way to tease out an elegant solution
here.
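One widely used alternative is keyset (cursor) pagination: the "next" token is the last item seen, not an offset, so a page's contents don't shift when new items arrive. A toy sketch over integer ids sorted newest-first (function names are illustrative):

```python
def offset_page(items, page, n):
    """Offset pagination: page boundaries shift as items are added."""
    return items[page * n:(page + 1) * n]

def keyset_page(items, n, before_id=None):
    """Keyset pagination: resume strictly below the last id seen."""
    if before_id is not None:
        items = [x for x in items if x < before_id]
    page = items[:n]
    cursor = page[-1] if page else None
    return page, cursor

items = [50, 40, 30, 20, 10]                      # newest first
page1, cur = keyset_page(items, 2)                # [50, 40], cursor 40
items = [60] + items                              # a new item arrives
page2, _ = keyset_page(items, 2, before_id=cur)   # [30, 20], no repeats

# With offsets, the same arrival shifts the window:
# offset_page([50, 40, 30, 20, 10], 0, 2)      -> [50, 40]
# offset_page([60, 50, 40, 30, 20, 10], 1, 2)  -> [40, 30]  (40 repeats)
```

This is cache-friendly in the way described above, but unlike the closure approach it only works when the items have a stable sort key to cursor on.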

