Hacker News

Sure, revocation lists are relatively small. But they need to be available to every server (replication), be proof against server/service restarts (durable), and be checked on every request (highly performant). So, a good revocation list effectively requires a database. Not a trivial thing to implement yourself, and a weighty requirement for an otherwise stateless service.



JWT revocation entries are even smaller than the tokens themselves, since you can revoke by hash (although you should really just revoke by user ID in most cases).

Your tokens should generally have a rather short lifetime - then you can keep the entire relevant window of revocations in memory.

The implementation is not trivial though, that's for sure.
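A rough sketch of the in-memory window idea (class and names are mine, not from any particular library): revocations are keyed by token hash and expire alongside the token itself, so the set never grows past the TTL window.

```python
import hashlib
import time

class RevocationWindow:
    """In-memory revocation set; entries live only as long as the token TTL."""

    def __init__(self, token_ttl: float):
        self.token_ttl = token_ttl
        self._revoked = {}  # sha256(token) -> expiry timestamp

    def revoke(self, token: str) -> None:
        # Store only the hash, not the token, keeping entries small.
        digest = hashlib.sha256(token.encode()).hexdigest()
        self._revoked[digest] = time.monotonic() + self.token_ttl

    def is_revoked(self, token: str) -> bool:
        self._prune()
        digest = hashlib.sha256(token.encode()).hexdigest()
        return digest in self._revoked

    def _prune(self) -> None:
        # Drop entries whose token would have expired on its own anyway.
        now = time.monotonic()
        self._revoked = {d: exp for d, exp in self._revoked.items() if exp > now}
```

Replication across servers is still the hard part, of course; this only covers the per-server check.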


Hmmm, using a database (e.g. Postgres) for the authoritative information, with memcached in front, sounds like it would be practical for most uses.


At which point you should probably ask yourself: "What value is keeping all of my state inside this token providing me?"


Probably not. If the Pg instance is replicated, as indicated above, it'll be challenging to keep the Memcached copy in sync. In other words, you can't just use the caching feature of your ORM, you'll need another piece.


Thanks, that does need further thinking about. :)


Postgres is not a good solution for this kind of data. I'd use Redis, but maybe there are even better products.
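For the Redis approach, one common pattern (my sketch, not a specific product's API) is to key revocations by JWT ID and set the Redis expiry to the token's remaining TTL, so entries clean themselves up. The functions below work with any client exposing `set(key, value, ex=...)` and `exists(key)`, such as redis-py's `redis.Redis`; a dict-backed stand-in is included so the snippet runs without a live server.

```python
import time

def revoke(client, jti: str, ttl_seconds: int) -> None:
    # Entry expires when the token itself would have expired anyway.
    client.set(f"revoked:{jti}", "1", ex=ttl_seconds)

def is_revoked(client, jti: str) -> bool:
    return bool(client.exists(f"revoked:{jti}"))

# Dict-backed stand-in so the sketch runs without a Redis server.
class FakeRedis:
    def __init__(self):
        self._store = {}

    def set(self, key, value, ex=None):
        deadline = time.monotonic() + ex if ex else None
        self._store[key] = (value, deadline)

    def exists(self, key):
        entry = self._store.get(key)
        if entry is None:
            return 0
        _, deadline = entry
        if deadline is not None and time.monotonic() > deadline:
            del self._store[key]
            return 0
        return 1
```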


Could you just have per-server tokens? Wouldn't a single client tend to hit just one server anyway?


> Wouldn't a single client tend to hit just one server anyway?

No? Maybe? It depends on your load balancer. Assigning a client to a specific server is "sticky sessions". Many of us don't want to tie a client to a specific server and prefer a completely stateless 12-factor-style mechanism where any server can serve the client and stateless tokens provide a mechanism to achieve this.


Not to mention the challenges of multi-region replication... doing this check for every request along a server-to-server pipeline adds still more latency, since each round trip to the db means potentially 2-3ms on top of more complex requests, which all adds up.


> and stateless tokens provide a mechanism to achieve this

without revocation. What's wrong with tying a client to a server, or co-located server? Either they are close enough to share tokens / sync fast, or not?


> What's wrong with tying a client to a server, or co-located server?

Nothing, if you can get away with it. What do you do if your server dies or is overloaded? The 12-factor patterns came to be for services running on ephemeral hosts in cloud environments. Stateless servers mean you can seamlessly serve requests from another server without problems. Sure, you can store the sessions in a shared resource (Redis, perhaps?) but this complicates failover and redundancy and may add latency.

Maybe this isn't an issue, maybe it is. If you don't need or want that, then just use normal sessions, for sure.

Revocation can be handled (although admittedly not as well as with sessions or stored tokens) through short TTLs and refresh tokens (which are stored, but only need to be looked up when the stateless token expires). It's not perfect, but it's often a good enough tradeoff.
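The control flow of that tradeoff can be sketched like this (all names and token shapes are mine, for illustration): while the short-lived access token is valid, no lookup happens at all; the stored refresh token is consulted only on expiry, and that's where revocation bites.

```python
# Stand-in for the refresh-token store (in practice a db or Redis).
REFRESH_STORE = {"r-123": {"user_id": "alice", "revoked": False}}

def authenticate(access_exp: float, refresh_token: str, now: float) -> str:
    if now < access_exp:
        return "ok"                      # stateless path: no lookup at all
    entry = REFRESH_STORE.get(refresh_token)
    if entry is None or entry["revoked"]:
        return "login-required"          # revocation takes effect here
    return "reissue-access-token"        # mint a new short-lived token
```

So a revoked user keeps access for at most one access-token TTL, which is the "good enough" window being traded away.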


What if you are running dozens of services each specializing in its own domain? Do you proxy each service through a pool of central webservers? Or do you just stand up a central auth server and have each service trust that auth server?


The latter makes sense to me. Auth is a cross-cutting concern.




