It's not so much that it's ahead of its time relative to hardware as it is something you do in the early versions of a program.
Using closures to store state on the server is a rapid prototyping technique, like using lists as data structures. It's elegant but inefficient. In the initial version of HN I used closures for practically all links. As traffic has increased over the years, I've gradually replaced them with hard-coded urls.
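To make the technique concrete, here is a minimal sketch in Python of what closure-backed links look like server-side: each generated link stores a closure in a table under a random id, and the id expires after a while. All the names here (`fns`, `flink`, `handle`, `LINK_TTL`) are illustrative inventions, not the actual Arc code.

```python
import time
import uuid

# Server-side table of stored closures: fnid -> (expiry time, closure).
fns = {}
LINK_TTL = 3600  # seconds a stored closure stays valid (example value)

def flink(fn):
    """Register a closure and return a URL that will invoke it."""
    fnid = uuid.uuid4().hex
    fns[fnid] = (time.time() + LINK_TTL, fn)
    return f"/x?fnid={fnid}"

def handle(fnid):
    """Look up and run the stored closure; unknown or stale ids
    produce the familiar expired-link message."""
    entry = fns.get(fnid)
    if entry is None or entry[0] < time.time():
        fns.pop(fnid, None)
        return "Unknown or expired link."
    return entry[1]()

# The state (here, `item`) lives in the closure's environment rather
# than being encoded into the URL.
item = {"id": 42, "score": 10}
url = flink(lambda: f"upvoted item {item['id']}")
```

The elegance is that no state needs to be serialized into the URL; the inefficiency is that every such link pins a closure in server memory until it expires, which is why hard-coded URLs replace them as traffic grows.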
Lately traffic has grown rapidly (it usually does in the fall) and I've been working on other things (mostly banning crawlers that don't respect robots.txt), so the rate of expired links has become more conspicuous. I'll add a few more hard-coded urls and that will get it down again.
Over the last week the home page appears to have been cached longer than the Arc timeout, no doubt due to the spike in traffic. Since I discard cookies when closing the browser, I need to log in daily, and it's been impossible to log in from the HN home page because of this. Refreshing the page doesn't help; I've had to click through to a story to be able to log in.
The problem there is that we switched to a new deliberately slow hashing function for passwords.
Edit: I investigated further, and actually you're right, the problem was due to caching. It should be better now because we're not caching for as long. But I will work on making login links not use closures.
Gauche Scheme has a bcrypt implementation, but I don't know what the compatibility story is between mzscheme and Gauche. I think they're both R5RS compliant, so it should work.
I see that newer versions of Arc run on Racket, but I have no idea if that's what HN is using or not.
I haven't seen a Scheme-powered PBKDF2 implementation, so I'd guess that's out.
The only other expensive KDF I can think of is scrypt, but I'd be pretty surprised if it has a Scheme implementation.
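For reference, the kind of deliberately slow KDF being discussed can be sketched with Python's standard library, which ships both PBKDF2 and scrypt. This is just an illustration of the primitives, not a claim about what HN actually runs; the cost parameters are example values.

```python
import hashlib
import os

salt = os.urandom(16)
password = b"correct horse battery staple"

# PBKDF2-HMAC-SHA256: the slowness comes from the iteration count.
dk1 = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

# scrypt: slowness (and memory-hardness) comes from the n, r, p costs.
dk2 = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
```

The design point all three KDFs share is a tunable cost knob (iterations for PBKDF2 and bcrypt, memory/CPU parameters for scrypt) so that brute-forcing stolen hashes stays expensive as hardware improves.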
Of course, I guess pg could have decided to call out to the OS to run any of those functions too.
Out of curiosity, are there any places where the HN codebase would be smaller if you used full continuations instead of just closures, allowing code akin to what I quoted from the PLT paper?
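As a rough analogy for what the PLT-style code buys you: a generator can play the role of a captured continuation, suspending a multi-page interaction mid-function and resuming it when the user responds. This is only a sketch of the idea (Python has no call/cc; in Racket the suspension points would be `send/suspend` calls), and the flow itself is made up for illustration.

```python
def signup_flow():
    # Each `yield` suspends the computation, analogous to send/suspend
    # capturing the continuation; `send` resumes it with the user's
    # next response. The whole dialogue reads as one straight-line
    # function instead of separate handlers wiring state together.
    name = yield "What is your name?"
    color = yield f"Hi {name}, favorite color?"
    yield f"{name} likes {color}."

flow = signup_flow()
print(next(flow))         # first prompt
print(flow.send("Ada"))   # resume with an answer, get the next prompt
print(flow.send("green")) # resume again, get the summary
```

With plain closures, each step of such a dialogue needs its own handler that explicitly packages up the state accumulated so far; a full continuation captures all of it for free, which is the code-size win the question is asking about.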