MongoDB's limit(x) and skip(y) are a lot nicer than most of Microsoft's ideas about pagination. It wasn't until SQL Server 2012 that they introduced OFFSET; before that, the official answer was essentially "google it"... http://stackoverflow.com/questions/2244322/how-to-do-paginat...
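For comparison, here's a rough sketch of the two styles side by side. The pymongo and T-SQL lines in the comments are just illustrative; the runnable part uses SQLite's LIMIT/OFFSET, which has the same shape, with an invented `items` table:

```python
import sqlite3

# MongoDB (via pymongo), page 3 at 10 per page, would look roughly like:
#   db.items.find().sort("id").skip(20).limit(10)
#
# The SQL Server 2012+ equivalent uses OFFSET ... FETCH:
#   SELECT id FROM items ORDER BY id OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;
#
# SQLite's LIMIT/OFFSET is close enough to show the shape:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO items (id) VALUES (?)",
                 [(i,) for i in range(1, 101)])

page, per_page = 3, 10
rows = conn.execute(
    "SELECT id FROM items ORDER BY id LIMIT ? OFFSET ?",
    (per_page, (page - 1) * per_page),
).fetchall()
print([r[0] for r in rows])  # page 3: ids 21..30
```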
The thing I'm not sure about, though, is the claim that "the database is almost always the most constrained part of a solution". On our simple site search, we tend to see the Apaches using a lot more CPU than the database calls do. I suppose as the site gets bigger it's easier to scale out the Apaches than the database (short of sharding), but we'd still have a lot of room to improve the MySQL layer anyway (memcached, for example).
I assume the alternative is paginating server-side, which wastes some network bandwidth and processing time on the server.
The vast majority of pagination occurs on the final rendered set. Meaning you've done 100% of the work on the database server to retrieve n% of the product, and when you come back for page 2, you're again doing 100% of the work for the next n%. Pagination is not a SQL shortcut; the sole "savings" it provides is network bandwidth between the app server and the database, which is seldom the limitation.
If you're keeping the result server-side, how are you storing it between requests in a stateless context like a web app? If I ask for page 1 and you fetch all the pages, do you cache them server-side? That would add a ton of complexity. I suspect your app is sending the first n rows to the client and throwing the rest away.
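The pattern being suspected here can be sketched like this: a hypothetical app-layer paginator that pulls everything and slices, versus one that pushes LIMIT/OFFSET down to the database. Table and function names are invented for illustration (SQLite standing in for the database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany(
    "INSERT INTO posts (id, title) VALUES (?, ?)",
    [(i, f"post {i}") for i in range(1, 1001)],
)

def page_in_app(conn, page, per_page):
    # Suspected anti-pattern: the database does all the work, the app keeps
    # one slice and throws the rest away -- and repeats it all for page 2.
    all_rows = conn.execute("SELECT id, title FROM posts ORDER BY id").fetchall()
    return all_rows[(page - 1) * per_page : page * per_page]

def page_in_db(conn, page, per_page):
    # Push the slicing down so only the requested rows cross the wire.
    return conn.execute(
        "SELECT id, title FROM posts ORDER BY id LIMIT ? OFFSET ?",
        (per_page, (page - 1) * per_page),
    ).fetchall()

# Both return the same page; only the wasted transfer differs.
print(page_in_app(conn, 2, 10) == page_in_db(conn, 2, 10))
```

Note this only addresses the bandwidth between app and database; as the parent comment says, even the LIMIT/OFFSET form makes the database walk past every skipped row on each request.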
Having a dataset your web server can crunch and cache is just one use case.