Ask HN: Why are developers so stingy with “rows per page”?
72 points by quiffledwerg 10 days ago | 73 comments
Why the heck am I usually allowed only 20 or 50 or maybe 100 rows per page with whatever I’m looking at?

Is there some sort of shortage of CPU time?

Why not allow me to see 500 or 1000 rows per page so I can scroll through the things you’re trying to sell me without needing to press “next” every 5 seconds?

Developers please, there’s no shortage of “rows”. Give us more.






As a non-designer developer, I'm going to say this is dictated by UX designers and has no technical merit. My preference trends towards "put about 10k rows into the browser and let the user use ctrl+f to find what they're looking for" which is probably also wrong for all sorts of reasons.

Only thing worse than pagination is infinite scrolling.


Bonus points for infinite scrolling where results that aren't on screen are removed from the DOM, making ctrl-f entirely useless. Looking at you, Twitter.

Extra bonus points for a contacts page (including careers) only reachable through the bottom of the page, which is continuously pushed further down by infinite scrolling.

Twitter hates their website. And it shows.

Is there a good way to handle this without all the overhead of the full DOM object for each invisible thing? Some way to keep searchability from the native client while changing the amount of data stored in memory?

I don't personally think there is a good way to handle this. The best compromise is that this type of infinite scrolling only comes along with a fleshed-out, well-working, and prominent search and filter feature.

In the case of Twitter, ctrl-f would allow me to search through my bookmarks, my feed or the replies of the tweet I am looking at. Can the Twitter search do these things? If it can, it's really not obvious, and if not, that sucks because a lot of usability is going out the window with no replacement.


No, you'd need to handle the search. Which is something I see more and more sites doing.

drives me mad too, but for infinite scrolling you do basically need to do this if you don't want your site to be a lag fest/memory hog on older hardware. of course twitter runs like shit on older hardware anyway

At least they're saving on memory, to be fair.

Pages are very useful when you don't want to dark pattern users (see: social media).

We could make very, very long scrolls of text in real life, but books are more practical.

Similar thing here, not everyone uses "Ctrl-F". Heck, many users don't know how to search (try to ask non technical people, you'd be surprised) and on mobile browsers for example, frequently there's no way to search (or no easy/easily discoverable way to search).


If you already know your search term, an actual search feature would be more useful. Scrolling and pages are for browsing where you need to see each item to make a choice.

Infinite scrolling is crap, but 10K rows in a result is going to feel a lot like infinite scroll.


Sometimes you want to see all results and then ctrl + f multiple search terms. You wouldn't want to reload the page as you try different keywords. Sometimes I don't even know the search terms until I have scanned through a few rows.

In our non-web app we do this. We have "special" grids for this, which wrap our query as a subquery, do a count() so the scrollbar can be drawn accurately, then use limits to pull only the visible rows from the database.

When the user adds filters on columns or sets a sort order, it's added to the where or order by clause of the outer query, respectively.

This way we can easily show 100k+ rows (like, all their invoices or orders), the user can scroll and filter to their delight etc, all super-responsive.

Surely it shouldn't be much harder to support on the web if one wanted?
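The windowed-grid pattern described above translates fairly directly to code. Here's a minimal sketch using Python's built-in sqlite3 (table, column, and function names are invented for illustration; the commenter's app is neither web- nor Python-based):

```python
# Wrap the user's query as a subquery, COUNT(*) it once to size the
# scrollbar accurately, then pull only the visible window with
# LIMIT/OFFSET as the user scrolls.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO invoices (total) VALUES (?)",
                 [(i * 1.5,) for i in range(100_000)])

# The grid's base query; user filters get appended to this WHERE clause.
base_query = "SELECT id, total FROM invoices WHERE total > ?"

# One COUNT(*) over the wrapped query tells us the total row count.
(row_count,) = conn.execute(
    f"SELECT COUNT(*) FROM ({base_query})", (100,)).fetchone()

def fetch_window(first_row: int, visible_rows: int):
    """Fetch only the rows currently visible in the viewport."""
    return conn.execute(
        f"{base_query} ORDER BY id LIMIT ? OFFSET ?",
        (100, visible_rows, first_row)).fetchall()

window = fetch_window(first_row=50_000, visible_rows=40)
```

The same shape works over HTTP: the count query runs once when the grid opens, and each scroll event maps to one small LIMIT/OFFSET fetch.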


> "put about 10k rows into the browser and let the user use ctrl+f to find what they're looking for" which is probably also wrong for all sorts of reasons.

I actually use a UI like that, and this particular one is not fun to use. You have to wait 2-3 seconds with an unresponsive UI until all 10k rows are loaded.

Having said that, I think this approach ("put about 10k rows into the browser") could still work, it just needs more love.


Navigating to page 6 of a result set, for example, generally takes a bit longer than 2-3 seconds. Just getting to page 2 often takes longer.

Totally agree with you and the author of the post.

Part of the problem is the lack of filtering (or even sorting) on the client side to reduce the information overload to something you're willing to wade through, which would trade off results returned for query complexity (of course, this also assumes it's been programmed correctly...).

We lose a lot using the 'search box only' query interface that google popularised.

If there is filtering, sometimes you can see how poor the integrity and/or quality of the data is due to some weird-ass categorisations, typing mistakes etc. etc.

Attempting to limit searches on laptops to a specific screen resolution did my head in on pretty much all of the major manufacturers sites.


But if, most of the time, what’s on page 1 is all you need, paying the extra time every time so that things are faster in the rare case you need page 6 is a net loss, both in user waiting time and costs for the site (often a big one).

Also, with smaller pages, site owners can track whether many users need page 2, 3, etc. and change page size or reload the next page’s data accordingly.


The other thing worse than pagination is broken pagination.

The initial query of all rows might be slow, but once I have a snapshot speed is not the issue. I regularly do ctrl+a ctrl+c alt+tab to Excel and ctrl+j to run a program that gives me the result I am looking for.

Where I have control over the back-end, I like to provide static dated snapshots of queries.

I can be stingy but I would look at: Time to run the query vs. Value of having all the data.


Maybe we can also discuss applications that don’t involve very large numbers of elements.

100-1000 elements, for example, could be accommodated on the same page. 10k makes me think about data grids, which may be improved by filtering on multiple columns, for example.


Server admin's gonna need an additional fire extinguisher for that.

EDIT: Guess you guys have no humor on this page


From a UX perspective, Nielsen Norman Group is a good reference: https://www.nngroup.com/articles/infinite-scrolling/

For your question specifically, I think the following explains it well:

> ...locating a previously found item on an extremely long page is inefficient, especially if that item is placed many scrolling segments down. It’s much easier for people to remember that the item is on page 3 than it is to gauge where the item is positioned on an extremely long page.


The whole point of the web is that it is dynamic. There is no guarantee that a search result that was on page 3 last week, to use your example, will be on page 3 today. Further, in many cases, bookmarking page 3 of a search result produces a null result when accessed, because the web site needs you to re-enter the original search term.

I would say people probably want to know where it was in a single session, not in 3 weeks.

im sorry do web results have 3 pages?

how much time do you have to browse a bad search hit?


I love this comment. Not for why you’d think though.

I believe it shows how close the nodes for “web” and the number 3 have got in our heads.


When was this written? Ctrl+f in the browser beats any pagination I’ve seen. By now everyone is well trained to scroll through endless feeds.

From a UX perspective I hate pagination, and I think in 2021 it’s an anti-pattern.

From database and server load perspective, I understand it.


I cannot recall where I saw this, but I recently encountered an infinitely scrolling list where a simple interaction (akin to a checkbox) would "bookmark" or "pin" the item for that session only. It created a very thin UI element with "next" / "previous" arrows and a count of bookmarks, similar to a Find Text interface. On hover (immediately, no hover delay) or touch-hold for next/prev the title of the item was displayed in a tool tip. It felt very natural and snappy, but for the life of me I can't find it again.

Pin at the top in a fixed ui banner-like element?

It depends a lot on the content. I feel most of the time, it's done for the wrong content. Something like hotel rooms or purchasing cameras are more suited for pagination. Something like Pinterest is better suited for infinite scrolling.

What is their source? Because I think it's a matter of page length rather than page number. If you place 1000 items at 30 items per row, people are likely to have no problem finding things. We are creatures of 2D, not 1D.

In my experience, it's the query time. Maybe it should be faster, but practically, with all the tech debt that goes into these things, it's not. I've actually experimented with different sizes for a social media app (meaning it queries total reacts, shares, etc) and 15 was close to optimal.

Latency is not an issue, and neither is displaying complex information. You can pull and display 400 dummy items no problem.

Loading time for the first "page" is extra valuable - you'd want that in 200 ms or so, ideally. So one trick is to load 3 items, then 30 or so.

Also you have to look at actual cost. Perhaps loading an extra 15 queries on the home screen costs $0.0004, but when you have 1m daily active home page users, that's an extra $400 per day. In unoptimized pieces of code, the cost could well be 20x higher.

If you have a very high average user value like Jira, that's fine. But for say, a free manga site or something like imdb, you want to shave off costs wherever possible.


None of that is a valid excuse for not giving me the option of 300/500/1000 rows.

Costs are, especially on free sites. Also, giving you more results may decrease performance for others.

There’s also the fact that the best UI gives you what you want, not what you say you want.

They could have tested this and seen that even users who say they want 1000 rows never look past the 100th, or often turn away before the longer page is loaded. (That’s the kind of analysis that can lead to lazily loading more data/infinite scroll.)

They also may think they already have site functionality that is better for the use case users say they need 1000 rows for, such as an advanced search facility.


Because there's no point in loading 500 rows from the database if 90% (made-up stat) of people don't read past the first 10.

Sure but there's a middle ground. 100 is probably reasonable (if we're talking about products).

10 is just stingy, as OP said.


Some comments suggest this is as a result of A/B testing, which may be part of the story today, but as someone who built websites that had to deal with returning large resultsets from time immemorial (i.e. pre-dot com bubble), it absolutely started because Netscape would simply blow up rendering tables larger than N rows (where N ain’t very large) and because partial rendering didn’t work very well with table based layout. The former meant you could crash a viewer’s browser while the latter meant anything more than say 20 rows would take eons to load over a dialup modem.

This all changed with the advent of infinite scrolling enabled via the whole ajax revolution, but this was (is still? I haven’t written front end code in a decade or so) difficult to get right.

In sum, this happened because of technical difficulties on the front end side (browser limitations or code complexity), and probably just stuck as convention. Perhaps some nice analyst then A/B tested the arrangement and found that it was happily optimal.


Similar issue with smartphones. They used to basically crash when faced with a large table. I assume the low end devices will still struggle.

There is a psychological aspect too. Show people 10 results and they'll actually look at them. Show people 100 results and they'll not only quickly scroll through them but also skip the top results, which is counterproductive.

The big sites have definitely AB tested changing the results per page and have stuck with 10 for some user metric driven reason.


- Query time: Developers are very squeamish about how long SQL queries take

- Payload size: 1000 rows makes the payload huge which adds even more latency

- Frontend performance: More modest machines will struggle to render very large dynamic tables of rich content.


Really none of these things should be a problem.

- Presenting your bread-and-butter data shouldn't require complex joins or expensive queries. If it does, you are doing something wrong.

- You can serve the text of a thousand-page book in just a few hundred KB. If your payload size is large, then I guarantee it's not the text on the page that's at fault.

- Likewise, no computer (not even one from 1996) will struggle rendering plain HTML. Any performance problems your website has are problems you've added yourself.


> Payload size: 1000 rows makes the payload huge which adds even more latency

I recently built a feature at BigCo. Like all our stuff, we have the default page size be a meager 10 rows, so as not to upset anyone. Because everything is always 10 rows. However, the frontend is hard coded to always request page sizes of 1000, which results in a payload of less than 100kb. Or at least, that was the plan until an "architect" spotted it and got upset because 1000 rows is clearly beyond the pale. SMH


We went a little way down that road and ended up negotiating 50 rows for desktop and 30 for mobile.

Sensible UI wise, but if the links are shareable it will just confuse users in the end.

It was a link behind a login, and not expected to be shared, since it's a list of the user's particular content.

>> 1000 rows makes the payload huge which adds even more latency

1000 rows of json is not huge

It’s typically maybe 300k which compresses down to under 50k
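That claim is easy to sanity-check: serialize a thousand smallish rows to JSON and gzip them. A rough sketch in Python (field names are invented, so exact sizes will differ from any real API, but the order of magnitude holds):

```python
# Serialize 1000 rows of typical product-listing data and measure the
# payload before and after gzip. Repeated key names compress very well.
import gzip
import json

rows = [{"id": i, "title": f"Item number {i}", "price": i * 0.99,
         "in_stock": i % 2 == 0} for i in range(1000)]

raw = json.dumps(rows).encode()
compressed = gzip.compress(raw)

# raw is tens of KB; gzip shrinks it by a large factor.
print(len(raw), len(compressed))
```

Since HTTP responses are normally served with gzip or brotli anyway, the on-the-wire cost of 1000 rows is closer to the compressed figure than the raw one.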


Because tables with many rows are SLOW.

I have one with 6K rows: 300ms to insert into the page on paper (console.time(1); console.timeEnd(1)), but in reality the browser freezes for 3 seconds (1.8s style, 900ms layout, 300ms update tree). The freezing goes away with position: absolute, but it still takes 3 seconds to show up after .appendChild. I tried replacing the table with flex divs; the speed was even worse.


Have a look at https://github.com/jpmorganchase/regular-table

Just a <table>, virtualized - here’s an example with 2 billion rows https://bl.ocks.org/texodus/483a42e7b877043714e18bea6872b039


Every time I point out that big HTML tables are slow, someone suggests a virtualized one. The problem is it's not real: you can't quickly search for the contents of the last row with ctrl-f after loading the page.

I'd rather wait 3 seconds or even 3 minutes than click through results for 10 minutes. Perhaps we should default to 10 rows and give the user the option to increase the number of rows on the page.

    table { table-layout: fixed; }

does nothing in Chrome for the speed of style recalculation and layout update

You also need to set explicit widths for columns

My bank does that. Want to print your monthly transactions? There you go. Oh no, it's just the first 10 rows with infinite scroll. Scroll, again and again and again, just so you can print to pdf.

The worst I've seen is Bluehost's domain management pages. An unsorted list of domains with infinite scroll where you have to scroll, scroll, and scroll, hoping that your domain will be in the next block that pops up (because, of course, they keep all your long-abandoned domains in the list). And they do it again in their DNS zone file editor. It will only show you the first 10 or so lines of each section, so naturally you think lines are missing and try to re-enter them. The comparison with the plain old cPanel interface is mind-boggling. I have complained; they don't listen.

Progress!

I wonder how I can start a campaign to legally ban infinite scrolling.


I like the idea of geared pagination https://github.com/basecamp/geared_pagination which is another gem out of Basecamp.

Their philosophy is to show less initially but as you start paging through additional pages it'll show more results per page until you hit a maximum amount. For example you could get 25 results on page 1, 50 on page 2, 100 on page 3, etc..

It's a happy medium. Keep your initial result minimal and focused but if a user wants more then keep giving them more in an efficient way (less clicking).
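The scheme can be sketched in a few lines. This is an illustrative Python version, not the gem's actual implementation; the page-size ratios just mirror the example numbers above:

```python
# Geared pagination: page size grows as the user pages deeper,
# capped at the last ratio for every page beyond it.
PAGE_RATIOS = [25, 50, 100, 200]  # rows on page 1, 2, 3, and 4 onward

def rows_for_page(page: int) -> int:
    """How many rows this page should contain."""
    return PAGE_RATIOS[min(page, len(PAGE_RATIOS)) - 1]

def offset_for_page(page: int) -> int:
    """Sum of all earlier page sizes = the query's OFFSET."""
    return sum(rows_for_page(p) for p in range(1, page))
```

The offset function is what makes this drop into an ordinary LIMIT/OFFSET query: page 4 starts at row 175 (25 + 50 + 100), with a LIMIT of 200.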


It's interesting to see this. I feel the same way, hate card-based layouts and generally want more information density (i.e., HN or old Reddit) as opposed to what's often done now.

I intentionally put all of my Elixir screencasts on a single page without much whitespace on Alchemist Camp (https://alchemist.camp/episodes). I can't think of a single employer who would have been cool with that design choice, but I frequently get emails and comments from people thanking me for it.


I'm positive this is usually due to convention, not a conscious design decision. In other words: websites give 20/50/100 row pagination options because other websites give 20/50/100 row pagination options.

There is no meaningful implementation difference between providing 20/50/100 options and 20/50/100/500 options, with 20 as the default in either case.

I would be very surprised if this decision is even discussed for more than 1 minute on most projects, let alone if any user research is done. But I'm happy to be proved wrong!


If you think that the Google search results page has that number of results per page as a result of cargo culting or tradition or as some sort of random number selection, I have a bridge in Brooklyn to sell you :-)

Google, probably not. Majority of online retailers, alex_c is probably correct. Same with many sites' internal search.

On the web it can definitely become a performance constraint, unfortunately. Frameworks can fold under even this number of rows if you're, for example, re-rendering your entire VDOM tree on every keystroke in an input field. This part can be optimized around fairly easily if you know how, but then the next barrier is number of DOM nodes (specifically, the layout calculations that may have to be done on them when even something seemingly unrelated on the page changes). Depending on the complexity of your rows and what else may be going on on the page, you're probably still confined to the triple-digits if you want smooth performance.

Source: I spent a couple of years wrestling constantly with these sorts of problems in a React-based toolset we were developing.

One strategy I've used before to improve the user experience and help people miss Ctrl+F less is to load the full data set into memory, even if only a small slice of it is actually rendered at a time, and then do all paging/searching/sorting/filtering client-side. JavaScript has no issue handling six digits of items this way, and it keeps things really snappy.
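That strategy reduces to a few in-memory operations per interaction. A rough sketch, with Python standing in for the browser-side JavaScript (record fields and function names are invented):

```python
# Hold the full data set in memory; each UI interaction just
# filters, sorts, and slices it — no round trip to the server.
records = [{"name": f"user{i}", "score": (i * 37) % 100}
           for i in range(100_000)]

def view(data, query="", sort_key=None, page=1, per_page=50):
    """Return one page of the filtered, sorted data."""
    rows = [r for r in data if query in r["name"]]
    if sort_key:
        rows.sort(key=lambda r: r[sort_key])
    start = (page - 1) * per_page
    return rows[start:start + per_page]

first_page = view(records, query="user1", sort_key="score")
```

Each keystroke or sort click re-runs this over six digits of items in well under a frame, which is why it feels so much snappier than a server round trip per page.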


If your framework can’t display 1,000 rows then it’s a bad framework or you’re programming it wrong.

You skipped over what I said:

> if you're, for example, re-rendering your entire VDOM tree on every keystroke in an input field

> This part can be optimized around fairly easily if you know how

The framework can render them, just not update them in the naive way (so in some sense, yes, it is possible you're "programming it wrong").

> the next barrier is number of DOM nodes

The other problem is the browser itself, not any particular framework.


If there is a shortage of cpu time, then please use pagination instead of dynamically loaded infinite scrolling.

I hate how so many websites manage to break the back button.


1000 products could mean 1000 images to show and 1000 templates to render and that may kill your browser (definitely true on a phone), and significant slowdown even on a desktop/laptop. The solution is simply to load more data as you scroll. This should be fast enough for you not to even notice it.

Yes, search facilities on pages are usually not great, it's easier to show the raw data and use browser search...

CPU time is cheap, human attention is not.

I would not assume most users want a lot of choices. Perhaps choosing from a list of 5-10 options is much less intimidating than choosing from 100. If someone clicks next page, then maybe the calculus changes.


A good search ui should allow you to specify number of hits, imo. But 1000 would present technical problems. I'm most experienced with React and it would be difficult to render that many rows of anything in a useful way.

Additionally, depending on how your backend is designed and your choice of ORM, sometimes requests can be slow. In those cases it probably makes sense to design a UI that encourages users to rely on search and filter controls vs giving them a large list.


>> React and it would be difficult to render that many rows of anything in a useful way

That’s not true. It’s dead simple to display 1,000 rows. Can you explain what the hard bit is?


It's easy with static content, but React tends to run into performance issues with long lists like that. Generally the way of fixing it is to virtualize the list: check the window scroll position and only render the rows you need to, instead of the offscreen rows. Twitter does this well, but I'm not sure you'd be able to Ctrl+F for text that isn't rendered.
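The core of virtualization is just arithmetic on the scroll offset. A back-of-envelope sketch (the function name, fixed row height, and overscan margin are illustrative, not any particular library's API):

```python
# Given the scroll offset and a fixed row height, compute which
# rows actually need DOM nodes; everything else stays unrendered.
def visible_range(scroll_top: int, viewport_height: int,
                  row_height: int, total_rows: int,
                  overscan: int = 5) -> tuple[int, int]:
    """Half-open index range [first, last) of rows to render.
    `overscan` renders a few extra rows on each side so fast
    scrolling doesn't flash blank space."""
    first = max(0, scroll_top // row_height - overscan)
    last = min(total_rows,
               (scroll_top + viewport_height) // row_height + 1 + overscan)
    return first, last
```

On every scroll event you re-run this and reconcile the handful of rows in `[first, last)`; the DOM never holds more than a screenful plus the overscan, which is also exactly why Ctrl+F can't see the rest.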

(I am not a real lawyer) but I think it's just little more complex to program. CPU time issues don't matter much, we can design around that.

If we pull, say, 1000 rows in the foreground, you have to keep in mind when designing the UI that you have to wait for data. E.g. you have to design some sort of indication that data is not yet 100% loaded if the user tries to search for something in the page.

Another point (maybe; I have no experience in UI design) is that users favor a fast-loading page, even if it only has 10 rows.


It is actually easier to just dump the whole set and put up a spinner, so I'd say no, it is not because of ease of coding.

It often is needed when you have a set with a million rows. You have to show the end user what the data will look like, give them a sampling, then help them filter and sort it down to what they really want.

That being said, if you show 20 rows when there are only 85 in the set, that is just a silly UX decision. Add a scrollable grid if needed, but the performance difference in returning a couple dozen vs. a hundred is fairly meaningless these days.


If there is advertising revenue involved, then the pagination is all about hits. Each page load is an opportunity to serve more ads.

Depends what you're displaying; JS frameworks can struggle to show thousands of rows smoothly, especially if a row is displaying computed data.

So, a lot of the time, developers will just limit the page size in order to guarantee performance on the majority of devices...


It's mainly out of respect for the user's bandwidth. For example, if they're on a 2GB phone plan, then 500 records could drain a significant % of their monthly allotment.


