Continuations used to be used more heavily on news.yc, but it was trivial to flush the cache by making a few requests from a single client, hosing everyone's sessions. That caused a switch to URL parameters for lots of things; why not all things? It's a fundamentally flawed model that I've never seen pg defend, and it certainly doesn't scale: either the server stores state for all N clients, which fails for large N, or you give each client its own piece of state to carry.
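To make that trade-off concrete, here's a minimal sketch (in Python rather than Arc, with made-up names, not news.yc's actual code) of the two models: a server-side continuation table that grows with every outstanding link it hands out, versus encoding the same information in the URL so the server stores nothing per client.

  import secrets

  # Model 1: server-side continuations. Every link handed out pins a
  # closure in server memory until it is clicked or evicted.
  continuations = {}                      # fnid -> closure

  def make_link(closure):
      fnid = secrets.token_hex(8)
      continuations[fnid] = closure       # memory grows with outstanding links
      return "/x?fnid=" + fnid

  # Model 2: state in the URL. The server stores nothing per client;
  # each request carries everything needed to handle it.
  def make_stateless_link(item_id, action):
      return "/vote?id=%d&how=%s" % (item_id, action)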
Let's take http://news.ycombinator.com/threads?id=pg as an example. When I visit that page I get dealt 75 fnids, and they're different every time, so each visit consumes 75 new slots in the cache. It costs me almost nothing in CPU or bandwidth to fetch the page, but it has a big impact on the other users of the system if I flush their fnids out of the cache before they try to use them. I first pointed this out at the end of April 2007. In response, pg cut the number of fnids used, but they've crept back in.
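A rough sketch of why that matters, assuming the fnid table is a fixed-size cache that evicts the oldest entries when full (the cap and the eviction policy here are illustrative guesses, not news.yc's actual numbers):

  from collections import OrderedDict
  import secrets

  MAX_FNIDS = 5000                 # illustrative cap, not the real one
  table = OrderedDict()            # fnid -> closure, oldest first

  def new_fnid(closure):
      if len(table) >= MAX_FNIDS:
          table.popitem(last=False)        # evict someone else's continuation
      fnid = secrets.token_hex(8)
      table[fnid] = closure
      return fnid

  # If one page view mints ~75 fnids, an attacker needs only about
  # MAX_FNIDS / 75 requests (roughly 67 here) to push every other
  # user's pending links out of the table, so their next click fails.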
The technique's flawed, trivial to exploit, and should be dumped.
Here's the last time I pointed it out: http://news.ycombinator.com/item?id=18083. pg made the post dead and emailed me about publishing a DDoS. Yet here we still are, using a broken concept.