Using closures to store state on the server is a rapid prototyping technique, like using lists as data structures. It's elegant but inefficient. In the initial version of HN I used closures for practically all links. As traffic has increased over the years, I've gradually replaced them with hard-coded urls.
Lately traffic has grown rapidly (it usually does in the fall) and I've been working on other things (mostly banning crawlers that don't respect robots.txt), so the rate of expired links has become more conspicuous. I'll add a few more hard-coded urls and that will get it down again.
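The technique being described — generating a link whose target is a closure held in server memory, keyed by a random id in the url — can be sketched roughly like this (in Python rather than Arc, purely as an illustration; the names `handlers`, `closure_link`, and `handle_request` are invented for the sketch):

```python
import secrets

# Table of pending closures, keyed by a random id embedded in the url.
# Each generated link captures its state in a closure instead of
# encoding that state in the url itself.
handlers = {}

def closure_link(fn):
    """Register a closure and return a url fragment that invokes it."""
    fnid = secrets.token_hex(8)
    handlers[fnid] = fn
    return "/x?fnid=" + fnid

def handle_request(fnid):
    fn = handlers.get(fnid)
    if fn is None:
        # What users see after a restart or after the table is pruned:
        return "Unknown or expired link."
    return fn()

# Example: a "More" link that remembers which slice of items to show.
items = list(range(100))
link = closure_link(lambda: items[30:60])
```

The downside is exactly the one mentioned: the table lives in one process's memory, so a restart (or evicting old entries under load) expires every outstanding link, whereas a hard-coded url encodes its state and never expires.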
You should hard-code that one too.
Edit: I investigated further, and actually you're right, the problem was due to caching. It should be better now because we're not caching for as long. But I will work on making login links not use closures.
I ask because I'd love to be able to make a claim like "even Hacker News, which is written in a Lisp, managed to implement a modern password hash".
I see that newer versions of Arc run on Racket, but I have no idea if that's what HN is using or not.
I haven't seen a Scheme-powered PBKDF2 implementation, so I'd guess that's out.
The only other expensive KDF I can think of is scrypt, but I'd be pretty surprised if that has a Scheme implementation.
Of course, I guess pg could have decided to call out to the OS to run any of those functions too.
If not, what was the design goal?
If slowing down web login attempts isn't part of it, why not get a dedicated auth server and offload the crypt stuff onto it?
And if it is the goal, you could use CPU-friendly sleeps on the front-end to give increasing delays to the repeated guesser.
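That front-end delay idea could look something like this (a hypothetical sketch, not anything HN actually does; `failed_attempts`, `delay_for`, and `check_login` are made-up names, and the backoff formula and cap are arbitrary choices):

```python
import time

# Track consecutive failed attempts per account and sleep an increasing
# amount before responding, so a repeated guesser is slowed down without
# the server burning CPU on an expensive hash.
failed_attempts = {}

def delay_for(user):
    n = failed_attempts.get(user, 0)
    return min(2 ** n - 1, 30)  # 0s, 1s, 3s, 7s, ... capped at 30s

def check_login(user, password, verify):
    time.sleep(delay_for(user))
    if verify(user, password):
        failed_attempts.pop(user, None)  # reset on success
        return True
    failed_attempts[user] = failed_attempts.get(user, 0) + 1
    return False
```

A sleep costs the server almost nothing while it waits, which is the point of the suggestion: the attacker is throttled, but the web server isn't doing expensive work on its behalf.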
Hashing functions designed for speed are absolutely the wrong thing for passwords.
But I don't see the need to do the processing on the web servers.
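For contrast with the fast hashes being warned against, here is what a deliberately slow, salted KDF of the kind discussed upthread looks like — PBKDF2 as shipped in Python's standard `hashlib` (shown only as an illustration of the technique; this is not a claim about what HN or Arc uses, and the helper names are invented):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a slow, salted hash suitable for password storage."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, dk

def verify_password(password, salt, iterations, expected):
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison to avoid leaking where the mismatch is.
    return hmac.compare_digest(dk, expected)
```

The iteration count is the knob that makes each guess expensive, which is also why one might want it running on a dedicated auth box rather than the web servers, as suggested above.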