
Revoking JWTs (JSON Web Tokens) - brokenwren
https://fusionauth.io/blog/2019/01/31/revoking-jwts
======
rdegges
This is a good article! fusionauth looks cool =)

Here are some things to consider though (I work on this problem a lotttt):

1\. Do you (as the app developer) really care about security that much? If
not, just go with local validation on your JWTs and understand that there is a
window of time, whatever your token lifetime is, during which you will be at
risk of compromise and abuse. If you do care, continue down the list; if not,
just don't worry about it.

2\. So you care about security, yey! Now, your problem is this: what level of
risk are you OK with? If you're ok with NO risk, continue reading. Otherwise:
set your JWT expiration time to the amount of risk you're ok with. EG: 1
minute, 5 minute, 1 hour, 1 day, etc.
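
A minimal sketch of what that looks like, hand-rolling an HS256 token with the standard library just for illustration (in practice you'd use a vetted library like PyJWT; `issue_jwt` and the secret are made-up names here). The `exp` claim is the knob: its value IS your risk window.

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url segments
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_jwt(sub, secret, ttl_seconds=300):
    """Issue an HS256 JWT whose lifetime bounds your revocation risk window."""
    now = int(time.time())
    header = {"alg": "HS256", "typ": "JWT"}
    claims = {"sub": sub, "iat": now, "exp": now + ttl_seconds}
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, claims)
    )
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

# 5-minute token: you accept at most ~5 minutes of post-revocation exposure
token = issue_jwt("user-42", b"dev-secret", ttl_seconds=300)
```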

3\. OK! So you're REALLY serious about security and want no risk. Now we're
talking! You really only have one choice: validate your JWTs centrally to
ensure that a user hasn't been deleted, token revoked, etc. But now you have
to decide the best way to do this! Here are your options:

A) You can validate tokens centrally on every request with the IdP: this is
basically the same thing as normal old session management: you check the
session against a DB to validate it.
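
As a sketch (the in-memory dict here is just a stand-in for your IdP's real session store, and all the names are hypothetical), this is the whole idea: one authoritative lookup per request, so revocation is instant.

```python
# Stand-in for the IdP's central session store; in production this is a
# DB/network call on every single request.
ACTIVE_SESSIONS = {"user-42": {"revoked": False}}

def validate_centrally(user_id):
    """Authoritative check against the central store. Revocation takes
    effect on the very next request: zero risk window."""
    record = ACTIVE_SESSIONS.get(user_id)
    return record is not None and not record["revoked"]

def revoke(user_id):
    # Flip the bit centrally; every edge sees it immediately.
    ACTIVE_SESSIONS[user_id]["revoked"] = True
```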

B) You create a cache of tokens that have been blacklisted (aka: a blacklist),
so that you can do an O(1) lookup to see if a token has been revoked. The only
problem? This is usually centralized, too! So no real benefit over option (A)
except that it's a bit quicker (if you don't make mistakes w/ cache
invalidation, etc.).
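
A sketch of that blacklist, using an in-memory dict as a stand-in for Redis (in production you'd use Redis with a TTL on each key): you only need to remember a revoked token ID until the token would have expired anyway, so the list stays small.

```python
import time

class Denylist:
    """In-memory stand-in for a Redis-backed token blacklist. Entries only
    need to live as long as the token itself could, so they self-prune."""

    def __init__(self):
        self._revoked = {}  # jti -> the token's own expiry time

    def revoke(self, jti, token_exp):
        # Remember the token ID until the token would expire on its own.
        self._revoked[jti] = token_exp

    def is_revoked(self, jti):
        exp = self._revoked.get(jti)
        if exp is None:
            return False
        if exp < time.time():
            # Token is past its exp anyway; drop the entry (self-pruning).
            del self._revoked[jti]
            return False
        return True
```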

C) You do what this article talks about: you distribute your blacklist
directly to your edges: one way to do this is with webhooks, websockets, or
really just about any method of syncing data between places. This is a great
approach, but unfortunately, a lot more can go wrong: what if your webhook
doesn't get processed quickly enough? What if your client breaks and stops
being able to accept webhooks? What if the webhook server stops firing or gets
overloaded?
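
To make those failure modes concrete, here's a toy edge-side webhook handler (all names hypothetical): if events carry a sequence number, the edge can at least *detect* that it missed a push, which is exactly the gap that otherwise silently leaves revoked tokens valid.

```python
import json

# Hypothetical edge node: the IdP POSTs revocation events to us and we keep
# a local denylist in memory.
LOCAL_DENYLIST = set()
last_seen_seq = 0  # gap detection: a missed webhook means stale local state

def handle_revocation_webhook(body):
    """Apply one pushed revocation event. Returns False on a sequence gap,
    which should trigger a full re-sync from the IdP: until then, the local
    denylist cannot be trusted."""
    global last_seen_seq
    event = json.loads(body)
    if event["seq"] != last_seen_seq + 1:
        return False  # we missed at least one event somewhere
    last_seen_seq = event["seq"]
    LOCAL_DENYLIST.add(event["jti"])
    return True
```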

In my personal opinion, (A) is the best choice: it's the simplest, the hardest
to get wrong, and can always be cached to speed things up. The only problem
with (A)? If you're doing it this way, why use a JWT at all? Why not just use
a normal old cryptographically signed session ID? Not much benefit, IMO.

~~~
jiveturkey
brilliant reply. you covered it top-to-bottom. usually people will gloss over
or neglect something.

You didn't cover the case (2.5) where you make a best effort to invalidate
immediately, but don't need _absolute guarantee_ of that as long as you know
it will be eventually consistent, where 'eventually' is bounded. This
relaxation of requirements can be liberating.

Didn't read TFA, but another way to distribute the blacklist to the edge is to
return it along with other data. This isn't going to be a portable library or
subsystem; whether it works at all depends very much on your architecture, and
it will be custom.

> why use a JWT at all

because the JWT can carry other information, eg authz, in a portable way and
that can be consumed by other backends. the portability of the JWT may also
allow for much easier debugging. why _not_ use a JWT? there's no
_disadvantage_ if you just process it like a normal signed session ID.

~~~
rdegges
The only disadvantages are:

1\. You are now passing large blobs of data over the network more frequently
than you would with just a signed session ID.

2\. If you want to ensure consistency of your data (for security purposes),
the extra data embedded in the claims of the JWT is useless, since you need to
re-query it for freshness after validating the session anyway.

------
dwaite
The strategies tend to be:

1\. shorter lifetimes with no other revocation system. This works well if you
go back to a central party which has the actual business logic/state to decide
whether to issue a new token

2\. token introspection API, possibly with caching to reduce network
calls/user latency

3\. API-based blacklist with invalidated token identifiers (either JTI or SID)

In the first two approaches you are trying to prevent state from being pushed
out to the apps at the edges.

The third approach is what I took with Distributed Token Validity API, which
was basically a distributed system (via state replication or fetch + cache) to
move the minimal state needed as close to the app as feasible.

~~~
dwaite
FWIW, I discourage the use of Webhooks (mentioned in the article) because they
aren't reliable. An application cannot easily detect if a logic or network
issue has caused it to stop recognizing invalid tokens.

~~~
brokenwren
I'll add on to what @robotdan mentioned. A distributed cache is basically a
Webhook approach to pushing cached values around. If your Webhooks (or
whatever you want to call them really) are transactional, then the state sync
is handled and guaranteed by the nature of events and the server generating
them.

------
tracker1
I think it really depends... I don't know that there is a perfect strategy
here, but even reducing to a 5-minute token with refresh, I'm not sure the 5
minute window is that bad. And if it is, there are more serious issues at
play.

One of the advantages of JWT, especially with public/private key signing, is
that you can verify a token without a potentially long network/service request
or lookup adding latency to every API request. And if one API call fans out to
internal APIs, the impact compounds.
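
That zero-network verification looks like this (stdlib-only sketch, shown with HS256 for brevity; with RS256 the edge only needs the *public* key, typically via a library like PyJWT, so it never touches the IdP on the hot path). The tail of the block builds a throwaway demo token just to exercise the check.

```python
import base64, hashlib, hmac, json, time

def verify_locally(token, secret):
    """Validate signature and expiry with zero network calls. Returns the
    claims dict, or None if the token is malformed, forged, or expired.
    Worst case, a revocation lags by the token's remaining TTL."""
    try:
        header_b64, claims_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # not three dot-separated segments
    pad = lambda s: s + "=" * (-len(s) % 4)
    signing_input = (header_b64 + "." + claims_b64).encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(pad(sig_b64))):
        return None  # signature mismatch
    claims = json.loads(base64.urlsafe_b64decode(pad(claims_b64)))
    if claims.get("exp", 0) < time.time():
        return None  # expired
    return claims

# Hand-rolled demo token, just to exercise the verifier above.
b64 = lambda b: base64.urlsafe_b64encode(b).rstrip(b"=").decode()
hdr = b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
cl = b64(json.dumps({"sub": "u1", "exp": int(time.time()) + 60}).encode())
sig = hmac.new(b"k", (hdr + "." + cl).encode(), hashlib.sha256).digest()
demo = hdr + "." + cl + "." + b64(sig)
```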

In the end, pick your poison, but if you want to expire them, you may as well
generate an identifier as a token/cookie and look it up against Redis or
another session database/table. Having the data in the payload doesn't do much
good if you're validating against another service on every request anyway. If
you want broader replication, Cassandra or a similar database with nodes near
each server could help mitigate things; again, there's no need for the JWT
payload at that point.

Using hooks to propagate revocations is almost worse, because it's fragile and
may see misses in practice.

------
BryanGiese
My friends argue about the best strategies for this. Will send to them to
stoke the fires.

~~~
robotdan
Many prevailing strategies simply persist the JWT. This works, but it is more
difficult to scale and sort of breaks the portability idea, in my opinion.

Once you persist it - you might as well be using a session. I suppose it
depends on your scale - maybe with just a couple of thousand users it would
work ok to store every JWT issued for their lifespan.

~~~
jakelazaroff
Are people really running into such terrible scaling issues with their session
database that the solution is to ditch it and use an entirely different
authentication strategy? It's not like we haven't been scaling databases for
decades!

I agree that persisting the JWT means you may as well be using a session, but
my takeaway from that is that you should just use a session, not come up with
more weird workarounds to justify JWTs.

~~~
brokenwren
Randall Degges has a blog post on this. It's pretty good actually:
[https://developer.okta.com/blog/2017/08/17/why-jwts-suck-as-session-
tokens](https://developer.okta.com/blog/2017/08/17/why-jwts-suck-as-session-
tokens)

------
victor106
This is very informative.

FusionAuth seems like an amazing CIAM for anyone to use. I will definitely
check it out.

Is the core open source? I wonder what language it’s written in?

~~~
robotdan
The core is not open source; it's written in Java.

