JWTs are good for API tokens, not end-user sessions. If you use JWTs for your cookies/session state, then you'll still need to maintain a table of valid JWTs and expire them when the user logs out, and thus you won't be able to use JWTs as they were intended to be used: as stateless tokens.
Most security auditors/certifications/compliance requirements do not consider even a 5 minute lag acceptable for user-driven logout. The definition of logout is that when the user presses the logout button, all the session data is invalidated server-side within roughly the time it takes for a request/response round trip.
Thus there is no benefit to using a non-opaque high-entropy string for user sessions, even though there is real benefit to using JWTs as API tokens for back-end services (esp. microservices) that do not have browser logout requirements.
I've seen plenty of people make the equal and opposite wasteful mistake: they insist on using a shared state store so that logins can be properly revoked, then they notice that every server looking up the session in this shared store on every request is having a significant performance impact, so instead they have the server cache sessions locally for 5 minutes (or even longer).
That is the cool thing about JWT. You get the best of both worlds. With JWT you don't have to hit the auth server for requests where a ~5 minute (or whatever) window doesn't matter. That is probably 99% of the authenticated traffic on a lot of sites (thinking of HN, reddit, youtube, etc). On requests that matter, like on a write request or a "sensitive" read request, you can always ping your auth server to make sure the token is still legit.
Cutting out 99% of your auth backend traffic can dramatically change how you design it.
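A minimal sketch of that split, assuming a hypothetical HMAC-signed token format and an `auth_server_check` callback standing in for the round trip to the auth server; only "sensitive" requests pay for that call:

```python
import base64, hashlib, hmac, json, time

SECRET = b"shared-signing-key"  # hypothetical key shared with the auth server

def verify_locally(token):
    """Check signature and expiry without touching the auth server."""
    payload_b64, sig_b64 = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig_b64):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims if claims["exp"] > time.time() else None

def authorize(token, sensitive, auth_server_check):
    claims = verify_locally(token)
    if claims is None:
        return False
    # ~99% of traffic stops here; only writes and "sensitive" reads
    # pay for the round trip to the auth server.
    return auth_server_check(claims) if sensitive else True
```

The point is that the local path is pure CPU work on the web server, so the auth backend only sees the small fraction of requests where staleness actually matters.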
Which happens to be something that computers are designed to be really good at. I've never worked at a company where auth or storing user sessions was a bottleneck.
I worked at a place whose auth server system literally serviced tens of billions of requests per day. 99% of that traffic was for "dude was logged in and is looking at some page that is not unlike the one you are reading right now on HN". JWT let us cut that traffic by 99%. The remaining 1% was for servicing refresh tokens and handling requests for "sensitive" traffic (i.e. account page, anything involving writes, anything dealing with billing). This was in a system with data centers across the globe.
Auth might not be your primary bottleneck, but each page load you service has a latency budget, and every millisecond spent blocked on some auth call is a millisecond that could be spent doing something else. And trust me, your database or crazy "solve it with redis" server isn't responding in 1ms; it's gonna be at least 20ms due to network overhead, and the latency on that request isn't a flat "20ms", it's "99% of the requests are 20ms and 1% are >>100ms". So now 1% of 10 billion requests (100 million requests) take way longer than 100ms... well, that sucks...
Not to mention when your auth backend blows up and dies... your site is completely dead in the water. JWT can at least let you tread water for a while.
JWT is amazing. People that diss it don't understand it, or likely don't understand the tradeoffs they implicitly made with their own auth architecture.
>>> I've never worked at a company where auth or storing user sessions was a bottleneck.
I can tell you from my experience working at JP Morgan (the largest US bank) that authentication checks were the #1 root cause of major outages.
That is, except when it was #2. It was a close tie with network related issues depending on the season.
It's easy enough to understand: when sessions can't be checked, every application and service that needs to check sessions is effectively 100% down.
Another hard learned fact is that (very) critical services or (very) high traffic services will simply never support remote sessions, because it's too slow and unreliable. (There are potential alternatives like JWT tokens or TLS authentication).
it's not the hash table lookup that is expensive, it's the network request.
Computers are not good at network requests because physics says they can't be - it takes orders of magnitude more time for light/electricity to traverse even one side of a datacenter to another than it does to go from the CPU side of the motherboard to the RAM side.
For plenty of use cases, the additional network roundtrip is very affordable. But it's simple math that explains why this might easily become half or more than half of the total load on your systems if you want to "just do an additional (network-mediated) hash table lookup" on every request.
The 100-2000ns it takes signals to move across a datacenter and back is not the primary cause of a network request being slow in comparison with a cpu.
10ms minimum to go to the next rack. For 99% of the requests. For the unlucky 1% you'll get outlier performance due to.... well... you spend the next sprint figuring it out and the next quarter trying to solve that scaling bottleneck.... It could be network, it could be some weird design thing, it could be redis, who knows? But yeah... enjoy figuring it out instead of focusing on your product.
1% of all requests is actually a lot of requests. And if you think 1% of your auth requests aren't 10x the median... you aren't measuring it hard enough.
What on earth kind of clownish network setup do you have where O(1) lookups over a network take 10ms? I worked on a tiny (3 people) engineering team serving 250,000 http requests/second 24/7/365, doing 5-6 over-the-network requests to other services (memcached, redis) per request we handled, and our 99th percentile time to get a response on the wire was < 15 milliseconds.
That's exactly my point, you are agreeing with me. I was refuting the comment claiming that networks were slow compared to on-die comms because of physics and lightspeed.
The signals going through the wire and back are 2000ns in a big datacenter. There are orders of magnitude more delays in the processing of those bits than their transmission through space via twisted pair or fiber.
It's not 10ms to go to the next rack though unless your network is broken. You're much closer to 100us (100,000ns).
Yes, it takes 2000ns for the signal to travel at the speed of light. ledauphin never talked about the time spent processing the signals, which is incredibly odd because that's where most of the time is being wasted.
This is a very common pattern in networking, people underestimate the speed of light and overestimate the speed of network equipment.
On the front end you also have a user id in plaintext that you can use for request routing to send the user to the right shard in the right datacenter. That doesn't have to be cryptographically secure since cryptographic auth happens in a later step. By not having to decrypt anything your front end geographical load balancers can handle higher bandwidths (and more headroom for DDoS attacks).
JWT solves all kinds of problems, you just haven't encountered them or aren't sophisticated enough to detect them yet. I'd assert that all the solutions you might propose have their own non-trivial issues.
For example your auth server better be god damn awesome if it is gonna service every front-end page load. If even 1% of its traffic takes an order of magnitude longer than the median... that means 1% of your front-end traffic is gonna take an order of magnitude longer too. Lord help you if that fucker hiccups for a minute every so often... JWT shields most front-traffic from auth-server latency issues because most front-end traffic never needs to talk to the backend. (And no, your auth server isn't perfectly designed and yes 1% of its traffic will be an order of magnitude slower for whatever random reason)
Most web apps that use "normal" auth don't call an auth service. They drop an encrypted auth cookie that is checked on every page load and which is very fast. Session data and any centralised auth check (banned user for example) are simply database calls, which are no slower than the perhaps 10 or 20 other calls you are probably already making on your page.
> or aren't sophisticated enough to detect them yet
> Session data and any centralised auth check (banned user for example) are simply database calls
JWT doesn't make sense if your app can simply make a direct database call for its authentication check. But if your app is composed of a myriad of microservices that aren't allowed to query the database directly and must call another service to access data outside the caller's scope, then yeah, the only options are making the authentication service fast enough to tank all the authentication checks, or using JWT, which lets any microservice confirm authentication status without querying the authentication service.
Which is another reason why I think microservice architecture is a giant waste of time and productivity unless you're google-scale.
> Session data and any centralised auth check (banned user for example) are simply database calls, which are no slower than the perhaps 10 or 20 other calls you are probably already making on your page.
They're calls that you have to complete before you can start making any of the others (it's not safe to start making database calls on behalf of a user who hasn't been authenticated/checked for bannedness yet). So you're talking about more or less doubling or tripling your page's load time (check authentication, check authorization, then do the queries that give you the actual data), assuming you've got decently written queries.
That's not even arguing in good faith, or it shows an extreme lack of experience. Authn/authz is routinely cached. Even if it's a database call, the query used there will likely be cached. Auth is a minuscule portion of a web page's performance.
Which completely undermines the argument for not using JWT. "We can't use JWT because then tokens would be valid for 5 minutes after the user logged out" - but if you're caching your auth for 5 minutes then you have exactly the same risks.
They're not even good for API tokens. There are valid use cases for storing data in a signed client-side token, but if you're going to do that then don't choose JWT as your serialization format - there have been too many alg:none or other issues. Just do hmac-sha256(data, secret) instead.
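A sketch of that alternative using Python's standard library; the key name and payload shape are illustrative, not from any particular framework. There is no `alg` header to downgrade, which is the whole point:

```python
import base64, hashlib, hmac, json

SECRET = b"server-side-secret"  # hypothetical key, never sent to the client

def sign(data):
    """Serialize and MAC a dict into an opaque-ish client-side token."""
    payload = base64.urlsafe_b64encode(json.dumps(data, sort_keys=True).encode())
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (payload + b"." + base64.urlsafe_b64encode(tag)).decode()

def verify(token):
    """Return the payload dict, or None if the tag doesn't check out."""
    payload_b64, tag_b64 = token.encode().rsplit(b".", 1)
    expected = hmac.new(SECRET, payload_b64, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(tag_b64)):
        return None  # tampered or wrong key: reject before parsing
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Note the constant-time comparison and that verification rejects before parsing anything attacker-controlled.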
Middleware like Nginx (at least the Nginx Plus version) know how to take apart JWTs and use the stuff inside. The same can't be said for your arbitrary custom serialization format.
The real use of JWTs is when you want third parties to be able to sign their own API requests, that your API will accept using the very same machinery by which it accepts your first-party-client API requests. If you use JWT, you can just pass the third party a private JWT signing key to use, add the corresponding public key to your validation set for JWT decode on your backend, and everything "just works."
A better use-case is having a third-party upstream service that you want your own service's users to be able to talk to directly — i.e. without using your service as an intermediary gateway. If you configure your third-party SaaS with your own JWT public key, you will be able to have your users
1. Log in with your service, receiving a JWT token
2. Send that token back to your service, as the auth key for authenticating as themselves with your service.
3. Also send that same token to the third-party service, to authenticate requests against your account there.
(One place where I just set this up the other day: allowing users of a crypto app to make authenticated requests to Infura via the MetaMask browser extension, where we're paying for the Infura credits to enable that. We send the user a JWT token; the client JS uses that token to build an Infura URL, and configures the MetaMask SDK with it. Infura accepts the requests as coming from "us" because it recognizes the JWT signing key we configured.)
We hit this case with a customer service where each “domain” (transaction management, fidelity, auth, catalog data, etc.) was handled by a different internal org, with an orchestrating layer on top.
Having the emitting entity sign their request in a standard format that can be checked by anything holding their public key helped a lot.
Most web frameworks support JWT out of the box, so if you're calling a web service written in one of them, it's preferable to use an existing format rather than rolling your own.
Sure there are some gotchas but it isn't that hard to read up on that once, configure your JWT check and then leave it alone.
Also, we pass additional claims in the JWT that avoid the need for the web service to check certain permissions or auth status itself (like what resources the user has requested), and this is going to be a mess if you try to "just HMAC(SHA256())".
Or use RSA for the token signing. The third party keeps the private key and sends you the public key. You can verify all the tokens; they are the only ones who can create new ones.
The point of JWT is that it's an ecosystem of stuff that all has built-in support for doing this. You don't have to arrange for this kind of signing "by hand"; on most services that have JWT "accept" support, there's a portal with a configuration field where you can just paste in a public key. Same as adding a line to ~/.ssh/authorized_keys, but for web requests.
(Yes, all of this is just a bad imitation of TLS client certificates. But browsers have a horrible, intrusive, non-syncing UX for client certificates. If they did with client certs what they do with cookies/passwords, nobody would have bothered to invent JWTs.)
Or use SHA3(secret || data); it was specifically designed not to have the pitfalls of SHA-2 that made HMAC constructions necessary, along with being less computationally expensive. There's also KMAC, built from Keccak, which has some advantages over simply hashing the concatenated key and message.
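The prefix-key construction that comment describes, sketched with Python's stdlib. SHA-3 resists length extension, which is what makes the plain `secret || data` form safe where SHA-2 would not be:

```python
import hashlib, hmac

def sha3_mac(secret, data):
    # SHA3(secret || data): safe against length extension (unlike SHA-2),
    # so no HMAC wrapper is needed. KMAC is the standardized variant.
    return hashlib.sha3_256(secret + data).digest()

def verify(secret, data, tag):
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(sha3_mac(secret, data), tag)
```

As other commenters note below, prefer a vetted library API over hand-rolling even this; the sketch is just to make the construction concrete.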
A friendly reminder for the 99% of people here who, like me, don’t live and breathe this stuff:
Don’t “just” use some crypto construction you read on HN. Don’t invent your own thing. Find a well respected crypto library (eg libsodium) and look up your use case in the API documentation. Eg, for authentication libsodium exposes these API methods:
I provided a link to a NIST spec on this exact issue and a stackexchange discussion with lots of eyeballs/votes. Sure, don't just listen to me, but please do listen to them.
Using SHA-3 as stated above is certainly something to consider for the numerous people out there implementing message authentication in their systems. It's simple and hard to mess up; if I were building a greenfield project today, it's probably the tool I'd reach for.
Libsodium is fairly complex and has had its own issues. SHA-3 implementations in most languages are battle-hardened and simple to use.
As that version of the post alludes to (in talking about D-H), the problem is that the people who most need these answers don't know how to ask their question in a suitable way. The NaCl/Sodium APIs are oriented around applications. You want to do X; Sodium says you should call this API, and the things you get back have the following properties.
For example, what Sodium will actually do for the API your parent mentioned is HMAC-SHA512-256, i.e. it's using SHA-512/256. But rather than give that as advice (and risk some developer either trying to re-implement SHA-512/256 because their tool doesn't offer it, or worse, thinking SHA-512 sounds strictly better so they'll use that), Sodium just implements the thing you should do.
But if I were going to write a snarky "Right Answers" type post about this subject what it would say is that JWT almost certainly isn't the solution to a problem you have, it might be a solution to a problem you wish you had, but you likely have actual problems, now, and you should work on those.
Why do I say "wish" you had? Well as we see in a lot of these threads, immediately people talk about how it'll be a struggle to scale up the naive session approach when you have this big geographically distributed system. If your startup becomes Facebook or Google, you'll have to rewrite all this stuff. Why not instead use JWTs?
Your startup isn't going to be Facebook or Google. When you prove me wrong you'll have piles of cash for engineers to fix the scaling problems you knew about and, much more importantly, all the other problems you'd never dreamed of. You are attempting premature optimisation.
Why would you not like a solid mechanism for verifying that a data packet hasn't been tampered with? In the case of JWTs which live in the browser, it would be essential that they are signed.
dotnet auth tokens are also signed, they are stored in cookies, is that any different?
> Wouldn't you just need a list of logged-out JWTs?
Sure, but that's not a real improvement. It's still stateful since you need to check the list with every request, you can't just verify the signature. Changing the logic from "must be in this list" versus "must not be in this list" does not give you much -- maybe some performance gain, depending on the situation.
> And that's if you really cared.
Well, that's my point. Developers may not care, but from the point of view of a security auditor, it's a vulnerability in your auth system, so the GRC/Security people do care, even if Foo start up doesn't. At some point, if Foo is successful, they will face this issue and need to support real logout to get the certifications they need to do business or be in compliance.
> Changing the logic from "must be in this list" versus "must not be in this list" does not give you much -- maybe some performance gain, depending on the situation.
As someone who implements this exact solution, it actually gives you a lot. If your JWTs have a relatively small timeout window (say N minutes), then the number of revoked tokens you have to keep track of is usually very small (e.g. for everyone who clicks the logout button with a valid token, you have to keep that token around for at most N minutes), and on many websites the average user never makes an explicit logout request.
So that means that in many (most?) usage scenarios you can just replicate the list of recently revoked tokens to your servers (easy with something like Redis) and then just check incoming requests against that in-memory list. Again, this isn't anything special or new; it is discussed in the article.
This is very different from storing the list of every valid JWT, which will often be much bigger than you can keep in memory.
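A sketch of that bounded revocation list (the Redis replication is elided; this is just the per-server in-memory check, with illustrative names):

```python
import time

class RevokedTokens:
    """Tracks revoked-but-not-yet-expired tokens. Entries age out on
    their own, because once a token's exp passes, the normal expiry
    check rejects it and the list no longer needs to remember it."""

    def __init__(self):
        self._revoked = {}  # token id -> that token's expiry timestamp

    def revoke(self, token_id, exp):
        self._revoked[token_id] = exp

    def is_revoked(self, token_id, now=None):
        now = time.time() if now is None else now
        # purge entries whose tokens have expired on their own
        self._revoked = {t: e for t, e in self._revoked.items() if e > now}
        return token_id in self._revoked
```

Because the list only holds tokens revoked within the last N minutes, its size is bounded by logout volume, not by total active sessions.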
JWTs expire, so you only need to maintain a list of invalidated JWTs that haven't expired yet. That gives you a shortcut: you only check the revocation list for tokens that have valid signatures and aren't yet expired. This can save a lot of DB queries, given you usually have to check auth validity per request.
How does that help? You won't know which tokens should be considered expired in-advance, you still have to call the DB for every request to check. There won't be many that are called with expired tokens compared with people using the site constantly and always having valid tokens.
Agreed, a lot of webapps can just ignore this. Suppose you have a 10 minute refresh time. Logout just deletes the client-side state, and in 10 minutes the user will be "hard" logged out since their token hasn't been refreshed.
Similar security to "abandoned session" which is of course what a very large % of users do instead of "logging out".
There is risk of cloned session, but there's no law saying non-banking apps have to care.
This has the nice property that if you're forced to, adding a cache to store invalidated tokens with a ttl of 10 minutes is an incremental enhancement.
You can "just" use a per-user "generation" key, which invalidates all of that user's outstanding tokens when incremented server-side. The list can generally be maintained by polling every few seconds, or via a websocket if you need the fastest possible expiry. Since most JWT libraries allow you to customize the validation function, it's often pretty trivial to add, and overall comes with only a small memory penalty.
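A sketch of that per-user generation counter (all names are illustrative; the `generations` map is the small bit of server-side state each app server replicates):

```python
# server-side map, small enough to replicate to every app server
generations = {"alice": 1}

def issue_claims(user):
    # the current generation gets embedded in the JWT's claims
    return {"sub": user, "gen": generations[user]}

def still_valid(claims):
    # signature/exp checks happen elsewhere; this only catches revocation
    return claims.get("gen") == generations.get(claims.get("sub"))

def logout_everywhere(user):
    # bump the counter: every outstanding token now fails the check
    generations[user] += 1
```

The memory cost is one integer per user rather than one row per session, which is the "small memory penalty" the comment mentions.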
The persisted state of user access can have two of consistency, availability, and partition tolerance. JWT uses cryptography to expand partition tolerance at the expense of consistency. The stricter the latency requirement on consistency of user state, the fewer partitions can be tolerated: geographic, multi-rack, multi-process, and so on, down to not allowing any client-server partition. JWTs are fine when partitions matter more than consistency, and a pragmatic choice to avoid centralized state on a backing service.
If your environment is such that 5 minutes of lag between logout and a token being invalidated is unacceptable, you can always just verify the token against the auth server per request.
JWT comes in handy for most of your site where it isn’t such a big deal. That way you aren’t flooding your auth server with calls; all your web servers can just validate the token themselves up until a refresh is required.
Basically JWT lets you have your cake and eat it too. Significantly less load on your auth backend, while still letting sensitive parts of the site verify on every request.
Because a lot of people see security in terms of black and white instead of a lot of shades of gray. Not everything requiring authentication is your bank account. HN, for example, could have even a 10 minute window and be okay.
Not having to block all your request waiting for the auth backend can improve your latency by a non-trivial amount. It also can make your auth server much less complex.
JWT is awesome. People just think it is "all or nothing" and implement it wrong.
That's generally my thought, too. In my career the authentication sophistication was always a trade off (effort vs security) based on how paranoid the org felt about certain things.
Having worked on PHP/Rails/.NET Webforms/Node.js apis/C# webapis the session management techniques are all very different.
Having run high traffic websites also, the ability to offload logged-in sessions into JWT was really, really nice. Not having to prune auto-generated sessions from disk of PHP applications is a nice thing to not worry about.
Because the server thinks that the session is still alive. The server doesn't know that the client has "dropped" the session. If the dropped token should somehow be found again, it still has all the authority that the original session did.
This got flagged by our pentest vendor as well, but to me it sounds like a very unrealistic attack vector.
If one can steal a token from your browser's storage or can intercept your HTTPS connection, then sure, yes, you have a huge vulnerability even without JWT.
In OpenID Connect, there are back channels that talk to each other so that all devices know you have logged out.
If you are using traditional auth then yes, you would need to track session state against the user so that when they access from a particular device, you could check whether they have already logged out.
Some more complex apps use a background thread to check auth status with the server.
> it sounds like a very unrealistic attack vector.
It isn't unrealistic at all. A lot of attack vectors seem unrealistic to the victim until they happen. How realistic is it to think that a hostile actor can inject their code into a build process without touching the disk, then get that corrupted executable signed; then that signed executable is installed by 18000 users with root permission?
I’ve only done a few JWT implementations so forgive my ignorance but isn’t this what “exp” and token refreshes are for?
If you don’t want to maintain a centralized session store and you don’t want long lived tokens just set the expiry to 90 seconds. When you attempt a refresh, if the user is logged out it will fail.
I’m really confused by the either/or of “hit central session store on every request” or “tokens that live five minute too long are bad mmmkay”.
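The 90-second-expiry-plus-refresh scheme from the comment above, sketched; the `logged_out` set stands in for whatever server-side state the refresh endpoint actually consults:

```python
import time

TOKEN_TTL = 90  # seconds, per the comment above

def make_claims(user, now=None):
    now = time.time() if now is None else now
    return {"sub": user, "exp": now + TOKEN_TTL}

def is_expired(claims, now=None):
    return (time.time() if now is None else now) >= claims["exp"]

def refresh(claims, logged_out, now=None):
    # the refresh endpoint is the one place that touches server state;
    # ordinary requests only check exp and the signature locally
    if claims["sub"] in logged_out:
        raise PermissionError("user logged out; refresh denied")
    return make_claims(claims["sub"], now)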
Huh? To track if the user is logged in (for longer than 90 seconds) you’ll need another token, used to refresh the JWT token. And you’ll need the normal session infrastructure to validate that anyway.
That sounds like a very complex system to cache session tokens. You may as well just cache normal session tokens for 90 seconds in your application servers. Same result, less complexity.
I think they are saying that to get that JWT would not be easy or likely, since it would need to be used within 5 minutes as well.
Although unlikely, I guess people want to know that clicking logout does just that, and that there isn't a chance some kind of hacker, colleague, nefarious company or country could continue to use your apps as you.
For users, I like to have a secret key per user and put something signed with this key in a cookie to represent the session. Then "log out everywhere" is as simple as regenerating their secret key.
No need to have to persist state for each session, which is nice.
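A sketch of that per-user-key scheme; stdlib HMAC stands in for whatever signing you actually use, and the cookie format is illustrative:

```python
import hashlib, hmac, os

user_keys = {"alice": os.urandom(32)}  # per-user secret, server side only

def make_cookie(user):
    tag = hmac.new(user_keys[user], user.encode(), hashlib.sha256).hexdigest()
    return f"{user}.{tag}"

def check_cookie(cookie):
    user, tag = cookie.split(".", 1)
    key = user_keys.get(user)
    if key is None:
        return False
    expected = hmac.new(key, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

def logout_everywhere(user):
    # rotate the key: every cookie signed with the old one now fails
    user_keys[user] = os.urandom(32)
```

The only per-user state is one key, and "log out everywhere" is a single write rather than a scan over a sessions table.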
I do those anyways - from my user table. But I don't have some extra table of sessions that is append-only (modulo expiration). Cuts out an entire system.
Because the session material could be compromised. How do you revoke tokens on lost laptops? What if you accidentally checked the token into Github and realized your mistake? What if malware captured the token? You have to be able to revoke access on the server side.
You can use JWTs and have a "revoked_sessions" table that works like a normal "sessions" table. At the start of the request, check if the token is in the "revoked_sessions" table and deny access if it is. But there simply isn't a point; this has the same performance and latency considerations that a normal sessions table has ... but with many more possible ways to screw it up. "Sessions with extra steps."
If the session material is routinely compromised, why use that method of storing session material? Make sure cookies are httpOnly, Secure, and where appropriate, use Strict instead of Lax.
I can understand the need to invalidate sessions server-side that might be malicious or fraudulent, to create a button in your app that says "Revoke all existing sessions", etc. And I can understand that other rules say a logged out session must not exist, and therefore you have to revoke the session. I especially agree if the JWT has sensitive roles in it, such as for an admin user.
But ... for most OAuth that uses JWT, I think keeping a short-ish expiry of say, 1 hour, combined with using cookies to store it is relatively okay. Perfect? No. But not terrible either, assuming you validate your JWT by specifying RS256 or whatever algorithm you expect, validate against cached JWKS and check both issuer and audience, issued date and/or expiry date, for example.
JWTs definitely aren't simple to validate unless you're lucky enough to use a really bullet-proof library out of the box. On the plus side, there really are only what, a half dozen steps to memorize about JWT validation. :D If new to JWTs, I highly recommend pasting one into https://jwt.ms and have a look. Yes, there's a lot to goof up, but they're relatively simple even so.
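Those half-dozen steps, sketched for HS256 with the stdlib: pin the algorithm, check the signature, then issuer, audience, and expiry. A real deployment would use a vetted library with JWKS-based key lookup rather than this; the encoder is included only so the sketch is self-contained:

```python
import base64, hashlib, hmac, json, time

def _b64url(raw):
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def _unb64url(s):
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def make_jwt(claims, secret):
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def validate_jwt(token, secret, issuer, audience, now=None):
    header_b64, payload_b64, sig_b64 = token.split(".")
    # 1. pin the algorithm: never trust what the header asks for
    if json.loads(_unb64url(header_b64)).get("alg") != "HS256":
        return None
    # 2. check the signature before believing any claim
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _unb64url(sig_b64)):
        return None
    claims = json.loads(_unb64url(payload_b64))
    # 3-5. issuer, audience, expiry
    if claims.get("iss") != issuer or claims.get("aud") != audience:
        return None
    if (time.time() if now is None else now) >= claims.get("exp", 0):
        return None
    return claims
```

Each check rejects before the next runs, so a forged token never gets as far as having its claims trusted.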
That said, I routinely use session cookies with a Redis KV session table to keep the JWT if I need it for other APIs etc. The one gotcha is making sure the random value for the session cookie doesn't already exist in Redis; using a timestamp-derived random string can help there, other than that, you're fine...
Oh and this is just my two cents but - never issue your own JWTs unless you're prepared to watch for bad actors forever, ideally have an HSM to store your private key material and are prepared to publish and add keys to your JWKS so you can rotate keys "easily" if they're compromised. I would not reach for JWT when you could just as easily design an API that uses distributed KV stores to validate if a session exists. However, at scale, JWT works great. And for OAuth, that means JWT is more practical than asking a cloud provider if someone's session is valid with every request. Basically, you'll know you need to issue JWTs when the benefits outweigh the complexity of maintaining it securely.
I would say, though, that this advice on JWT issuing might change in the future: after all, Istio made running a PKI distribution infrastructure a lot easier than before it came along, allowing for mTLS between all the things (assuming your Kubernetes is secure, of course).
And the same advice on not issuing your own JWTs applies to session cookies like Flask's that are protected only by a server-side secret. Someday you'll need to rotate that secret and when you do, everyone's sessions will vanish forever. That's probably a good thing, and not a risk you'd see frequently, but when combined with the data exposure of keeping session data in potentially clear text (or base64 encoded text) which applies equally to JWT as to cookie stored sessions, well, it can feel a lot nicer to keep session data in KV storage. If you have really long JWTs, it's more practical too, compared to sending multiple cookies each under 4KB long to store a 7KB JWT with lots of extra fields and roles, etc.
Have the user change their password, and store datetime_password_last_changed on the user model. Then just treat any tokens issued before that time as expired. There are some use cases where storing tokens might be necessary, but for most startups this is probably not a good idea.
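A sketch of that issued-before-password-change check; the field names and plain-dict user model are illustrative:

```python
import time

def change_password(user, new_hash):
    user["password_hash"] = new_hash
    # this one timestamp on the user row revokes every older token at once
    user["datetime_password_last_changed"] = time.time()

def token_valid(claims, user):
    # any token issued (iat) before the last password change is treated
    # as expired, with no per-token storage needed
    return claims["iat"] >= user["datetime_password_last_changed"]
```

This piggybacks on a column you likely want anyway, which is why it suits startups better than a dedicated token table.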
> If you need to make a DB trip on every request, what advantage are you getting from JWTs?
Because you don't need to check the DB every request! That's the cool part!
You only hit the DB for requests that matter!
For example, do you really need to check the DB to load this very page? If the auth token expiration is say, 5 minutes... does it really matter if a bad guy gets ahold of that token and loads this page? They can't post or vote because for those requests... you can pass the token to your auth server to be validated. 99% of your traffic is probably servicing the loading of this very page. 1% (or probably closer to 0.1%) is any kind of traffic that you actually care if the token hasn't expired within 5 minutes.
The problem people get into is they think security is black and white. It's gray. Some things need to trade latency for security... like write traffic. For 99% of the read traffic on HN? Who cares... even 10 minutes or more might be okay depending on the business.
People invent these complex pub-sub systems and stuff... they are working under the idea that all requests need to be super-secure. Most of it doesn't. I'd assert for most businesses.... 99% of the authenticated traffic could deal with a 5 minute max window after logout where somebody could maybe, theoretically impersonate somebody. And for the 1% of that traffic where it matters, you can always hit the DB.
"for most businesses" you're never going to have enough traffic that a classic session database (even stored in, gasp, MySQL) would be a bottleneck.
JWT definitely has its place, but it has so much complexity, so many tradeoffs and footguns, that it should only be used when you have a specific problem with a standard session system that you need to solve, not as the default.
As the article mentions in a sidenote, a time-bounded revoked sessions table is small enough to keep in memory and can be updated via pubsub for the relatively infrequent case where you need to revoke one.
It's definitely more complex but at certain scales it might be worth it.
The "keep in memory" part is gonna be hard as you scale to multiple servers. You run into all the issues that lead people to stick an oracle DB behind their app servers. I agree that these considerations do pan out in some cases; I'm not saying that any of these proposals can't be the right way to go depending on your overall design, but now we are squarely in optimization-land and have long given up on statelessness.
The issue is not deleting the cookie. The issue is that the JWT can still be used as a valid session unless you have some sort of blocklist. Once you introduce a blocklist, you might as well ditch the JWT and use some opaque token stored in a table.
Is that a problem though? Does your threat model include clients that correctly send logout requests when they're supposed to but don't delete cookies when they're told?
Couldn't the previous user have rigged the app to not send the logout request, or way more plausibly installed a keylogger to get your password, which allows them to log in again at will?
I'm not disagreeing with you, it just seems to only cover a very specific type of attack, if someone else can mess with software. And of course if they can't it is unnecessary.
If that's how you feel, don't trust them to logout when they walk away. Make those sessions short-lived and ask for the password again whenever they do something remotely sensitive.
Because a user could simply provide the cookie again on the next request. If you still see the JWT as valid even though you deleted the cookie, the user could stay logged in during that time.
Cookies are managed by the browser, hence they are ultimately controlled by the user.
Depending on the site, that might be fine. For parts of the site that deal with sensitive data, just verify the token with the auth server on every request, like you would old school.
It really depends on the site and how much you care about having a window between logout and all the tokens becoming invalid. Most sites probably have large parts where a few-minute interval is acceptable.
So all of reddit, every page, needs to reverify its auth token every request? Like, what is the worst that is gonna happen in a five minute window between logout and the token expiring? Sure make the account management pages reverify on every request... maybe even revalidate when you perform a sensitive write action (which is probably far less than 1% of the authenticated traffic). But every read request for a normal user? You are telling me that those all need to re-verify every request?
In the case of reddit, by not validating the auth token on every (non-sensitive) read request they could probably shave off 99% of the traffic going to their auth server. Better still it improves page latency as each request doesn't need to block on waiting for a response from the auth server. Faster page loads! And, since you were smart about it... all the stuff where it actually does matter that the token gets verified right away... you can do that too! It's win, win!
A lot of people misunderstand JWT or see it implemented incorrectly. JWT is pretty rad, honestly.
There is a much more straightforward way to handle session expiry not mentioned in the article: in most cases your application will need to fetch something like a "User" object from the database anyway. That object can simply contain a version number that you increment when you need to log someone out. If the version number on the User object does not match the one in the JWT, you discard the token.
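A rough sketch of that idea. The claim name "ver" and the `sessionVersion` field are illustrative, not from any standard or library:

```javascript
// The JWT payload carries the version that was current at login time;
// the user record carries the latest version.
function isSessionValid(jwtPayload, user) {
  return jwtPayload.ver === user.sessionVersion;
}

function logoutEverywhere(user) {
  // Persisted on the User row; every outstanding JWT now fails the
  // check the next time the user record is loaded.
  user.sessionVersion += 1;
}
```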
JWTs are useful where logouts are rare (games/apps), API volume is high, or some latency between logout and actual logout is acceptable.
Think of apps like Uber. Millions of DAU, but people rarely explicitly log out. If your logged-out user list is 5-6 orders of magnitude smaller than your active user list, it is easy to replicate/cache. Very high API volume to many different services. Apart from a few, the majority of APIs won't have major security implications if a rogue actor calls them.
So yeah, banks shouldn't use them, Farmville/Uber using them is perfectly fine.
Games need the ability to ban players immediately, which at the very least requires a centrally stored blacklist. That means either a query for every request or pushing the blacklist somehow.
> Games need the ability to ban players immediately
Do they? I haven't cheated in games for years, but the only game I can think of that took any immediate action was StarCraft: Brood War. There were a few hacks that would result in your removal instantly. You were not banned, just disconnected.
Halo 2, WoW, RuneScape all banned cheaters in waves. As far as I know they did this as a measure to further impede the cheaters trying to run tests about what actions and behaviors result in negative consequences.
My (naive) approach would be short-lived tokens, and a mechanism for disabling an account's ability to refresh said tokens. I wouldn't mix account management and billing with gameplay. This would in turn force users to open a different session when trying to access this authenticated data and effectively block bad actors immediately.
The amount of JWT tutorials available right now is absolutely overwhelming. I found myself worried actually that session tokens were insecure just because I could not find recent documentation/tutorials on them.
I asked for suggestions on r/node for alternatives to JWT and the first comment was "Just use JWT".
Too funny. I haven’t been doing front end for very long, maybe a few years...
I knew of the existence of JWT so that’s what I ended up rolling with on my first true frontend-heavy project. JWT tutorials are an excellent example of my theory that “the internet doesn’t know shit, don’t trust it”. Case in point: google “JWT ReactJS”. 8/10 tutorials are storing them in localstorage. At the time I knew literally nothing, but I knew that was dumb as hell.
One reason is if you store refresh tokens in localStorage, a hacker's XSS attack can send the refresh token to himself and keep refreshing the token forever (or until somebody actively invalidates the session) - and thus use the token for whatever evil purpose.
> [...] asked for suggestions on r/node for alternatives to JWT
Heard good things about paseto[1]. It claims to be as convenient as JWT, minus the security issues. Check it out; it has some decent libraries for most mainstream languages. Haven't tried it myself yet.
Because JWT is the right answer for most scenarios. It's the best of both worlds when done correctly. You aren't blocking most of your requests waiting on your auth backend and for requests that actually need to have up-to-the-instant knowledge of a token's validity, you can always elect to hit up the auth backend anyway.
It provides a hell of a lot of flexibility in terms of how you design your authentication architecture. For example it can let you use the same set of tokens for all your API's as you use for "web" traffic. It can reduce page latency. It can substantially reduce the footprint required for your auth backend.
It's pretty awesome. It is just misunderstood. People assume that you have to invent some kind of "token revocation" system when people log out... nope. I assert that unless you are a bank or something, 99% of your authenticated traffic is read-heavy and can tolerate 5 minutes' worth of somebody getting ahold of a token that was used for a logged-out session. All the write traffic and "sensitive" read traffic can just hit up the backend server to do real-time token validation.
For example here on HN, 99% of authenticated requests are just to view this page; all this traffic doesn't need to insta-verify the auth token. The "post comment", "vote" and "change password" traffic makes up less than 1% of the total and can be "insta-verified" against the auth server. Who gives a shit if somebody impersonates that session for up to five minutes if they still can't comment or vote? The odds of it happening are minuscule and the damage isn't really bad...
If the simple thing is sufficient, don't do the complex thing.
You say that JWT is the right answer for most scenarios; I think that's not true. Aside from revoking, there's risk in complexity, and I did my best to provide evidence that complexity causes real-world issues. You allude to this when you said "if done right", but that's a big if. I think this tends to fall on deaf ears, because nobody feels that they might be the one to mess up JWT and inadvertently introduce vulnerabilities. Perhaps that is true for you, but the added complexity _is_ causing real-world security problems for actual developers.
The advantages I'm hearing is 'not blocking requests' and 'reduces latency' (which sounds like the same thing), but how much does this really matter? How much latency do you think this is? Has this really been a bottleneck for you?
You might well be part of the small group where every 5ms matters, but it would be disingenuous to suggest that that's true for "most scenarios". Most systems can bear an extra Redis fetch.
I'll concede it might just work better/simpler in your architecture, especially since you mention a dedicated auth backend, but shaving a few ms of a request is not a good enough universal reason given the drawbacks.
> If the simple thing is sufficient, don't do the complex thing.
I fail to see how JWT is complex. It is actually quite simple--it's bog-standard public key cryptography. Hell, for most requests even your damn load balancer can validate the auth token; your front-end servers might not even see a request carrying an expired token!
I would assert all these homegrown "just throw a redis instance at it" solutions are far more complex. Now you gotta deal with cache invalidation and that isn't fun. Plus it is a network request, and that takes a long time... time which I could spend doing something more useful for my customer.
The decision tree for JWT is easy:
- Is the token expired? Yes -- Return 403 (note: your load balancer can do this)
- Is it for a "sensitive request"? Yes? Verify token against auth server
- All other requests (99% of your traffic) - Validate the token locally.
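That tree can be sketched as one routing function. The action strings, the shape of `token` (`{ exp }` in Unix seconds), and the "is sensitive" flag are assumptions of this sketch; the actual signature check and auth-server call happen elsewhere:

```javascript
// One branch per line of the decision tree above.
function authDecision(token, isSensitive, nowSeconds = Date.now() / 1000) {
  if (token.exp <= nowSeconds) return 'reject';        // expired: 403 (load balancer can do this)
  if (isSensitive) return 'verify-with-auth-server';   // real-time revocation check
  return 'validate-locally';                           // signature check only, ~99% of traffic
}
```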
I've provided data in my article of many instances where people got JWT wrong, including an example from Auth0. I wouldn't call myself a security researcher, but many who are have also said this.
Your comment that it's 'actually pretty easy' feels pretty reductive. If you're really making this argument, I feel it would be good to provide some evidence.
I've been doing this for a while, and in my experience a large group of devs would likely not be able to explain public key cryptography, despite having ownership over features and/or applications. This is not just about you and what you find easy, this is about the entire community with a huge variety of experience levels.
I did... I made a decision tree. It's really that easy. The hard part is deciding what requests are sensitive and which aren't and that's a business decision not an engineering decision.
Adding redis or some crazy pub-sub crap to deal with logged out tokens.... that is truly making it far more complex. JWT is all about you deciding which pages truly need real-time "this token is invalid" and which don't.
Once you decide all requests need real-time you either are lying to yourself or JWT truly isn't the correct answer.
You keep parroting "complexity", yet in your original argument you stated that for "sensitive" requests, you should just hit the backend anyway.
So that implies that you already have a database setup and you are already using it in your app. That means there is zero extra operational or deployment complexity (compared to implementing a completely new and different bloated system.)
So all that network latency and cache invalidation come for free? What if the system isn't on the same LAN and has a few hops... hops that may involve the speed of light over a few thousand miles and all its slowness...
I don't think you've thought about the problem space enough.
> I don't think you've thought about the problem space enough.
God grant me the self confidence of this fella.
I’ve been working as an infosec engineer and have spent years arguing with frontend engineers (note: it’s usually an individual) who want to implement JWTs. JWTs can work for large companies with complex distributed systems and the expertise to run them, but unless you’re doing some heavy API stuff or you’re at Google scale, it is almost always the wrong choice (more often than not, the request highlights an engineer who has been almost exclusively in the JS ecosystem and has very little backend experience).
I don't think you have thought about the problem space enough.
Nobody has disputed that JWT reduces network latency. My argument is that JWTs are unnecessary complexity for your app, overused, potentially dangerous [0] and for 99% of use cases: probably unneeded.
Let me demonstrate my claims.
Average TTFB (Time to first byte) on mobile is 2.594 seconds [1]. TTFB is the duration from the HTTP request to the first byte received from the server.
> a few thousand miles
Sure. Worst case scenario, right?
I set up a MongoDB Cluster on Mongo Atlas. I selected the AWS Region 'eu-west-1', which is housed in Ireland. I am from America, so I estimate this physical distance at 3,500 miles. A true worst case for any web app. I plugged in some example tokens and then wrote an application that pulls down 1,000 random tokens.
My average latency was 113 ms.
So a "worst case" database connection scenario is only 4.3% of the TTFB.
Do you use a client side render framework allowing you to defer or asynchronously make database requests? Well, considering fully loading a webpage on mobile takes 27.3 seconds [1], that database request is now 0.41% of the time it takes to fully load that page. Less than 1 percent.
So the "best case" JWTs offer is the TTFB is reduced by 4% or full page load time is reduced by 0.4%. Is that a good thing? Yes, of course. Is it worth the trade offs? No.
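The percentages above follow directly from the cited figures (all numbers are from the comment, not measured here):

```javascript
const dbLatencyMs = 113;   // measured average round trip to the remote cluster
const ttfbMs = 2594;       // cited average mobile TTFB
const fullLoadMs = 27300;  // cited average mobile full page load

const shareOfTtfb = dbLatencyMs / ttfbMs * 100;     // ≈ 4.36% of TTFB
const shareOfLoad = dbLatencyMs / fullLoadMs * 100; // ≈ 0.41% of full page load
```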
Let me reiterate the tradeoffs that I personally see.
- Complexity in development, deployment and operations. (This is less of an issue if you started with JWTs.)
- Every single request your app makes is now larger, either slightly or significantly. (Larger requests also take more time; I'd wager the 113 ms difference is smaller when compared against a request carrying a JWT payload.)
- Potential security issues ([0], [2])
- Unable to revoke tokens on demand (There are ways to achieve this, but you're literally implementing session tokens at that point...)
Your whole argument implies that TTFB is the most important metric. I can't imagine the first thing that startups do is spend time, money, and effort on trying to reduce their TTFB by 100 ms. Also, reiterating the literal article you are commenting on, "Statistically, most of us are building applications that won’t make a Raspberry Pi break a sweat." I can't dictate what individuals do with their own web apps, but considering my web app does 1 ms lookups to Mongo (not even Redis), I can't see a situation in which JWTs are worth it.
JWTs are hype tech, and people love to ride that hype train. However, when we write code I believe we should be thoughtful as to how we write code. Before I start on a project or add a new module or even pick a database I look at the outcomes. What will this code do for my application? What will this code do for my users?
My users might have to deal with a few milliseconds of latency. Your users may have to worry about theft or loss of their personal data [0][2][3].
[0] event-stream survivor here. The most popular JWT module on NPM right now with 6.3 million weekly downloads has 15 dependencies from over a dozen contributors. Even if JWT is a secure standard in itself, that is a large attack vector.
[3] You yourself said that "unless you are a bank or something [...] can tolerate 5 minutes worth of somebody getting ahold of a token that was used for a logged out session". 5 minutes of someone accessing an account they should not have access to is an unacceptable and huge security hole.
I generally agree with most of what you've said about JWT, but I think it's better retrofitted after you have performance issues.
First pass, just do a database query! Most startups never hit the limits of this approach. It's hard to get simpler.
Second pass, cache the state in memcache. If you're on app engine or heroku or some other paas, you already have it available. Even fewer startups hit this limit.
Third pass, it's time to break out JWT. Congratulations, this is a great problem to have.
First of all, if my requirements are "not JWT" then the correct answer is not JWT.
> You aren't blocking most of your requests waiting on your auth backend
Yeah, at Facebook scale. My database responds in less than 1 ms.
> for requests that actually need to have up-to-the-instant knowledge of a token's validity, you can always elect to hit up the auth backend anyway.
So... what is the point of JWT when I always need to know if a token is valid?
> For example it can let you use the same set of tokens for all your API's as you use for "web" traffic. It can reduce page latency.
Again, my database responds in milliseconds. If you're doing client-side rendering anyway, why not just throw in middleware to check the session token? You are trading literally 1 millisecond of latency for unneeded application complexity.
> I assert unless you are a bank or something, 99% of your authenticated traffic is read-heavy and can tolerate 5 minutes worth of somebody getting ahold of a token
Read heavy traffic does not imply that 5 minutes of stolen credentials is okay. Could my app survive if someone stole a token and used it for 5 minutes? Sure. Do I want that to happen? No. Without JWT, I can revoke tokens in milliseconds. I can revoke tokens if IP is changed. I can revoke tokens if user agent changes. I can revoke tokens if a user rotates their device.
> All the write traffic and "sensitive" read traffic can just hit up the backend server to do real-time token validation.
Again, what is the point of JWT if you still need to hit the backend? All read traffic is sensitive to my app.
> For example here on HN 99% of authenticated requests are just to view this page
Sure, then why use JWT at all? Keep the profile name and points in cookies, upvotes in local storage. Who gives a shit if the data is a little stale? Right?
JWT is unnecessary bloat in my opinion. At Facebook and Google scale I can see how saving billions of database calls a day could be useful. For people with less than a million page hits a day, probably not.
I feel like the author doesn't mention one thing that I always thought was one the best advantages of using tokens stored in browser localstorage instead of cookies which is that the risks of CSRF attacks are lower/gone?
Cookies _always_ get sent with every request, so an attacker site can make a request to your API and if the user is logged in there, they can do as they please with that user's session... so CSRF tokens then need to be sent with any request which adds a bunch of other complexity.
If authentication is handled with a token stored in localstorage rather than a cookie, no other site has access to the token and thus through more authentication hoops one need not jump.
Am I just horribly confused and wrong about all this?
Yes, unfortunately you've picked the wrong issue to be scared about.
The CSRF attacks you're describing have been solved by browsers with the SameSite attribute on cookies. If you set SameSite=Lax or SameSite=Strict, cookies will not be sent with requests originating from a different site.
In choosing localstorage over HttpOnly cookies, you've put yourself at greater risk in the event of an XSS attack (meaning, an attacker running their javascript code on your own domain).
When your users' session tokens are in localstorage, an XSS attacker can exfiltrate the tokens from localstorage, and potentially transfer them off your site to their server, where they can then cause more havoc.
HttpOnly cookies prevent this problem because the XSS code can't access the cookie value, so they can't be exfiltrated and transferred elsewhere.
> The CSRF attacks you're describing have been solved by browsers with the SameSite attribute on cookies.
This is only partially true; it requires users to have browsers which respect this. That's probably true for 99% of users now, but not 100% (pasting a comment from a project I worked on recently):
/*
'lax' means cookies will not be sent for cross-domain requests on modern browsers, except
from top-level navigations with a GET or HEAD request (which are safe). This mitigates
CSRF without any additional token handling in browsers Edge (since 2017), Firefox (since
2018), Chrome (since 2016), and Safari (since 2018)
*/
> In choosing localstorage over HttpOnly cookies, you've put yourself at greater risk in the event of an XSS attack (meaning, an attacker running their javascript code on your own domain).
XSS is an issue regardless of whether SameSite is set; attackers just can't access the session cookies via JavaScript (assuming HttpOnly is also set). An attacker who was able to inject a script into a user's browser on your own domain can get around CSRF protection, and session cookies will be sent. The only disadvantage localstorage has compared to this is that the token can be sent to the attacker's server as well.
I agree cookies are much better for auth* information; if we're using JWT, those can fortunately be put in localstorage also right?
That's a bad tradeoff. Local and session storage are not secure. They were never meant to be. In storing JWTs in local or session storage you've traded CSRF away for XSS in return. The latter is MUCH harder to secure against. Any script loaded on your page, whether served by you or a third party, can access local or session storage.
You can even try to save the JWT in "memory" by saving it to a Javascript object but that's still vulnerable to any script that runs from your page and knows where to look.
It turns out the only really secure storage mechanism on a browser is... cookie with httpOnly, SameSite, and Secure flags set. If you're going to use cookies to store your JWT, well, you might as well just use cookies?
JWT has its place but it's a bad trade to give up cookies in favor of that.
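For reference, those three flags together look like this on a Set-Cookie header (the cookie name, value, and lifetime here are illustrative):

```javascript
// Builds a hardened session cookie header with the flag set described above:
// HttpOnly (no JS access), Secure (HTTPS only), SameSite=Strict (not sent cross-site).
function sessionCookie(name, value, maxAgeSeconds) {
  return `${name}=${value}; Max-Age=${maxAgeSeconds}; Path=/; HttpOnly; Secure; SameSite=Strict`;
}
```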
Many of the concerns about JWT are pointing to the problematic governance and questionable choices of the standards committee, such as being unencrypted by default, and the reliance on defeatable browser safeguards (e.g. CORS, even localstorage(!)), rather than primary behaviours (like HttpOnly), to preclude theft of credentials.
From a risk perspective, JWTs consequently present a larger attack surface, have greater intrinsic potential for inadvertent screwup, and they skew towards a larger blast radius if compromised.
The decision frame for security hats is never (willingly) "can this mechanism be made secure enough", but "what's the worst that can happen?". Anecdotally, once you've started finding design misfeatures in something, it's only a matter of time before more (and perhaps more subtle) issues, come home to roost. Security folks can be super judgemental about competence, and some will even give voice to this outcome by deriding the maker as much as the thing made, but there's a bayesian truth to it as well.
What's the advantage over storing it just in the cookie then, since you need to look at it anyway, and since the localstorage half is useless on its own (thus not useful from JS)?
No, you're not wrong. But I would go further and say JWTs should be obsoleted by using an object-capability model (see: http://habitatchronicles.com/2017/05/what-are-capabilities/ ) which is a more fleshed out way of using unique unforgeable "tokens" to access things. The ocap model is tailor made to solve the problem that CSRF is an instance of (confused deputy attacks).
I guess it depends on how you use it yeah, like the token can be used to refer to a specific resource, which is then granted to whoever has the token. But obviously you can just treat them as a way of doing traditional identity based access control if you want, which isn't the most effective way of using them. JWTs don't let you easily revoke them, or narrow access to specific resources. That's something you have to do yourself, or rely on a library to do, but that would be built into an ocap based system.
Do you know of any proposals for browsers to directly use capabilities?
Servers can implement a strict capability model for server-side objects but the browser's granularity for a capability owner is almost useless.
To be fair, no popular operating system makes much of a distinction between the human operator, the operator's actions via I/O, the behavior of applications ostensibly started by the user, and various other automatically started software, so at least browsers provide the concept of a domain for separate ownership.
The most promising things I know of are Google Fuchsia (which is a mobile OS), and Goblins, which is a Scheme library for doing distributed programming. There is something called Realms which is being worked on for JS, but I haven't dug into it too deeply yet. It might provide a good foundation for building ocaps into the browser, though.
As an aside, the author mentions that people who use JWT instead of sessions can be likened to those who learn React/SPAs before conventional approaches:
> This is similar to many newer developers learning how to build SPAs with React before server-rendered HTML. Experienced devs would likely feel that server-rendered HTML should probably be your default choice, and building an SPA when needed, but this is not what new developers are typically taught.
I do agree with that point. It's important to learn the fundamentals before jumping into large frameworks and potentially over-engineering things in the future.
To be perfectly honest I thought JWTs made a lot of sense until I actually started trying to deploy them in a SAAS side project I was working on. It turned into a comedy of errors and I just decided to go back to cookie based authentication which worked out much better for me. I like the idea, but in practice JWTs were more hassle than they were worth.
I think you are overlooking the main benefit of JWT - is to allow authentication by a trusted third party, thus enabling single sign on, eliminating the need for password storage and management, login forms, etcetera.
Not sure why you’re being downvoted, but agreed. Tonnes of apps want to support SSO, and most modern SSO systems use Open ID Connect, where the id tokens are JWTs. If you want to let your users login via Google, Facebook, Okta, whatever, then you’ll be dealing JWTs.
And once you’re doing that, it’s a small step to say:
“even if my users want to signup/login via username and password, I’d rather not manage that myself. I’d rather use a 3rd party system to store the passwords, implement MFA and password recovery and signup/login forms and whatnot, and just trust the id tokens from that system.”
Auth0, Keycloak, FusionAuth, etc. And again you’ll be dealing with OIDC and JWTs.
If you want to manage your own usernames, passwords, signup/login pages, password reset flows, MFA flows, etc., then yeah, I’d go with traditional sessions stored in a db/cache that you manage, and auth tokens are just session ids. But if you want to outsource all of that, plus support SSO, then the id token you deal with will not be traditional session tokens, they’ll most likely be JWTs issued by someone else, where you just verify the signature.
The token is short lived (or should be!) and lost to the client. If that is good enough or not depends on the use case.
For example, if you don‘t trust the client you also cannot trust the logout button or know that the login credentials are not compromised.
So you would need a scenario where you get a token (let‘s say valid for an hour) and you use it on a computer that you trust and then someone can copy the token but afterwards you somehow trust the device again.
What am I missing?
Additionally most people never actively log out (especially on personal devices which are like 99%) so it’s practical to push a small list of invalidated and not yet expired tokens to all service caches in the rare case someone actually presses that logout button. Certainly much cheaper than checking them on each request or updating a cache of all sessions.
>So you would need a scenario where you get a token (let‘s say valid for an hour) and you use it on a computer that you trust and then someone can copy the token but afterwards you somehow trust the device again.
Wrong. You need to be able to invalidate tokens on other devices. (Lost device / revoked access use-case)
Ah, you want to logout everywhere. That‘s not how most logout buttons work. (I can logout of practically all my accounts only on one device.)
If you want to logout everywhere that‘s extremely rare and as I said such a table to capture this information would be tiny.
Additionally these tokens are short-lived. So even without token invalidation, „logout everywhere“ can just invalidate the refresh tokens, and all devices are logged out within one hour.
You can validate whether the token is valid with the issuer on each request if you want; furthermore, issuers support logout functionality - say auth0 has an endpoint that will kill the session. It’s probably a bit expensive to check the token on each call, but if your security model requires it....
Really not seeing the problem. In most non-trivial apps, authentication infrastructure is its own service anyway, so hitting auth0 on each call in practice works very similarly, up until a certain amount of traffic, which I’ll never reach.
Then there’s no point in using JWT. You may as well use a UUID to hit auth0’s endpoint on each request. You really don’t get the article? Did you read it?
I’ve read it and even bookmarked it, because I thought it explains JWT well. But perhaps you’ll need to restate your concerns to get me on board.
JWT is a standard that comes with all of the benefits I described above and is used by many services that I can take advantage of. If the article tells me to switch back to some session id based scheme because it is supposedly smaller, I think I’ll pass.
Is that a practical use case? For a token that‘s valid for one hour? And the device is presumably locked. Is it really realistic that a user realizes quickly that the device is stolen and worried that it will be unlocked quickly and then actually logs in on another device to revoke the session of some website they were using (I wouldn‘t even know this is possible, how would a user know!) and that just to invalidate the token that would expire anyway within 30 minutes on average? Is that even remotely realistic to say nothing about „basic“?
I had (fully encrypted) devices stolen, of course, and I did revoke keys and changed passwords (just in case). But I never managed to do this within one hour and there was never a risk of anyone unlocking the device anyway.
With SSO, you don’t really control login/logout at all, that’s the whole point. For delegating authentication and whatnot to a system like Auth0, Keycloak, etc., for logout on a single device, you just clear the header holding the JWT. To invalidate the tokens completely, often you can’t, but you’re probably doing short lived refresh tokens, and you can make it so they can’t refresh. So at least they don’t maintain access for too long. i.e. https://auth0.com/docs/tokens/revoke-tokens
It’s a trade-off you make for not having to handle password persistence, and getting free-ish login/logout pages, SSO, MFA, password reset, etc. For many companies it’s a good trade-off.
Keycloak actually has user session management. You can call an admin api to logout a user from all devices and session. There is a draft proposal of session management in OpenID Connect.
Any infrastructure required to support logout is far less complicated than what's required to support secure password storage, login, changes, resets, recovery, etc. If such simplicity comes at the cost of using JWT, I think the tradeoff is very worth it.
I also seem to recall PHP doing this in the form of PHPSESSID in the query string.
It's a bad idea no matter who's doing it though. It enables accidental session-jacking (unless you turn on annoying countermeasures like invalidating sessions when IPs change and so forth), reduces cacheability, and a whole bunch of other things that come with leaking a secret in the URL. I don't think JWTs are a panacea but they beat the heck out of 2003-era session management.
Yeah, I got this comment elsewhere too. I decided not to edit, because I think current standards at least strongly discourage session IDs in URLs, and this functionality is basically obsolete.
Sessions requiring cookies felt like a reasonable approximation, and I've already had to hedge so many other statements =)
You can also just use Basic auth and get an Authorization header sent with every request. You know, what auth systems should have been using all along.
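For what it's worth, the Basic scheme really is that simple: the header value is just a base64-encoded `user:password` pair. A minimal sketch (the function name is mine, and since the credentials travel with every request, this only makes sense over TLS):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the value of an HTTP Authorization header for Basic auth."""
    credentials = f"{username}:{password}".encode("utf-8")
    return "Basic " + base64.b64encode(credentials).decode("ascii")

# A client would attach this to every request, e.g.
# headers = {"Authorization": basic_auth_header("alice", "s3cret")}
```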
> If they’re not safe, what chance does the general (developer) public have?
Something I've been wondering is, do we really have any evidence that the average Auth0 developer is more of a security expert than any other dev?
I'm sure they have security experts on staff designing systems, but it doesn't seem likely that everyone doing the building is an expert. They are as likely as anyone else to code in mistakes.
> Something I've been wondering is, do we really have any evidence that the average Auth0 developer is more of a security expert than any other dev?
Security companies have average developers just like everyone else. They probably do have better domain knowledge from working on their product but they will miss other important things.
The real difference in security comes from product management. Almost everyone I've talked to has some sort of risk assessment process where these sorts of things are laid down in great detail. The leading companies will invest in not only trying to discover weaknesses but will plan to mitigate or resolve the issues within a reasonable timeline.
1. I think it's important to compare the security issues that JWT has had to the alternative. There, 'vanilla' cookie auth (which I assume to be session IDs) is by far the biggest offender.
2. It's not true that JWT is stateless and therefore irrevocable. RFC 7519 defines 'jti' for exactly this reason:
> The "jti" (JWT ID) claim provides a unique identifier for the JWT.
> The identifier value MUST be assigned in a manner that ensures that
> there is a negligible probability that the same value will be
> accidentally assigned to a different data object; if the application
> uses multiple issuers, collisions MUST be prevented among values
> produced by different issuers as well. The "jti" claim can be used
> to prevent the JWT from being replayed. The "jti" value is a case-
> sensitive string. Use of this claim is OPTIONAL.
I think the rise of JWT stems from the fact that, unlike traditional auth tokens (see the bundle of entropy that legacy OAuth tokens were), they are human readable. When I set up an OAuth API that a user hits, I can exchange a short-lived OIDC id token for a long-lived capability-based bearer token for greater security, and when developers use my API it's easy for them to tell why their capability token doesn't work without having to fire off a bunch of requests and guess.
Edit: it occurs to me that the question of the JWT being 'stateless' might be about how session based storage often changes, but a JWT is largely immutable. Just for what it's worth, a session should not be revoked simply by removing it from the user cookie, or an attacker can persist simply by refusing to allow the cookie to be removed.
> However, these new technologies create a lot more buzz than their simpler counterparts, and if enough people keep talking about the hot thing, eventually this can translate to actual adoption, despite it being a sub-optimal choice for the majority of simple use-cases.
I hate this about the Node/Javascript community.
Whenever I need to learn how to do something in Node js, 4 out of 5 blog posts or tutorials will be about some overly complicated technology or new, shiny thing instead of the classical way.
Man, I just want to do the same shit we were doing in PHP 10 years ago, but this time I want to see how to do it in Node/Express. Is that too much to ask?
Honestly, most Node/Express tutorials are 1:1 copies (try googling sometime). This isn't Node/Express specific though; the majority of tutorials are material to signal the author's knowledge to potential gigs/jobs, not to share knowledge. Which is why a hello-world React tutorial needs to set up Redux, or a Java tutorial brings in Spring Boot, etc.
I’ve spent the last couple weeks ripping out JWTs from a site and just using cookies for auth, being that the site and the REST APIs are all on the same domain & servers anyway, and doing all the OIDC/OAuth and refresh token handling on the (ASP.Net Core) backend.
I have to say, considering how "simple" some of this stuff is conceptually, it's been surprisingly complex, with a lot of nuance in places. I can see why they say "never roll your own security". It is dangerously complex even to implement common patterns well. I hope future standards and frameworks can make this space simpler and safer.
If your end user is using a browser, the session/cookie is the right mechanism to use. Even though it seems more convenient to use JWT instead of a regular cookie, since JWT is not recognized by browsers, there are security gotchas when using JWT as a replacement for session.
JWT makes sense for authenticating microservices within a trusted network, where a gateway server logs the user into the network using a session. Microservices within the network communicate via JWT where the user does not have direct access to them. There should be a separate server that does the lookup for specific services.
JWTs are great! You get to use your customer's machine to store a lot of information about them and the session, in a form they can actually read in most cases, and you don't have to do a bunch of lookups each time they connect.
Sure, you have to keep a revocation list, but you do anyway, for the same reasons.
If you use short lived sessions with refresh tokens you get the Goldilocks scenario where a valid user stays logged in even when they are away for a long time, and you give them control over logging out instantly on all devices and access to the data you’re using about them.
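A sketch of that short-lived-access-plus-refresh-token pattern (names, TTLs, and the in-memory table are illustrative; the refresh-token table would be persistent in practice):

```python
import secrets
import time

ACCESS_TTL = 300           # access tokens live ~5 minutes
REFRESH_TTL = 30 * 86400   # refresh tokens live ~30 days

# The only server-side state is this (small) refresh-token table.
refresh_tokens = {}

def issue_refresh_token(user_id):
    token = secrets.token_urlsafe(32)
    refresh_tokens[token] = {"user": user_id, "exp": time.time() + REFRESH_TTL}
    return token

def refresh_access(token):
    """Exchange a refresh token for fresh short-lived access claims,
    or None if the refresh token was revoked or has expired."""
    record = refresh_tokens.get(token)
    if record is None or record["exp"] < time.time():
        return None
    now = time.time()
    return {"sub": record["user"], "iat": now, "exp": now + ACCESS_TTL}

def logout_everywhere(user_id):
    """Log out on all devices: drop every refresh token for the user.
    Outstanding access tokens die on their own within ACCESS_TTL."""
    for t in [t for t, r in refresh_tokens.items() if r["user"] == user_id]:
        del refresh_tokens[t]
```

"Instantly" here means within ACCESS_TTL for any access token already in the wild, which is the usual compromise this pattern accepts.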
> Experienced devs would likely feel that server-rendered HTML should probably be your default choice, and building an SPA when needed, [...]
Is there any good article on why SSR is better than an SPA? I really do not see any advantages of SSR, especially for pages where you need to be logged in and the data to be displayed is acquired via an API.
- On average, less data is transferred: even in modern frameworks, bundles still weigh more than multiple static pages.
- Small (or non-existent) JS bundles mean much less time/energy spent compiling, which is currently the largest task when rendering a page for the first time.
Both of these advantages are especially visible on mobile devices.
An insanity has just occurred to me: you could update (i.e. regenerate) the user's ID to invalidate tokens, fixing the logout issue and keeping the architecture stateless :)
This implies you would also need to know whether the user ID exists before you use it, which means a lookup (because it might have been 'regenerated' and no longer be valid).
If you take that one step further and just introduce disposable user ids, that point to a "real" user id, you've basically reinvented the session ID.
Sorry for the offtopic. Can somebody recommend an up-to-date book discussing authentication and access control methods in web applications/APIs, their strengths and drawbacks?
The benefit of JWT comes only if you are using it in a stateless manner.
If you are storing a session_id which needs to be verified on each call, you are effectively reusing it as a normal session cookie, with unnecessary overhead and payload cost. JWT is useful only if you trust it.
This is something I’ve gotten into fights with coworkers over when I proposed doing a similar thing. It was a weird circular argument and I’m very curious if someone here has a better opinion than, “JWTs can’t store state because they’re stateless!”
The whole point of a JWT is that it is stateless, and the information is (cryptographically signed) in the client. The advantages and disadvantages of JWT both derive from that.
If you want a session, just use an opaque session cookie. Then all information is safely handled in the server.
While it's certainly technically possible to use a JWT like a session cookie (or store session information in it), what you then get is a solution that combines the disadvantages of both approaches: all the complexity of making "visible" client-handled information safe, with the additional overhead of also having it on the server, while you lose the advantage of a JWT in integrating third-party services.
So, either use one, or use the other, depending on your requirements. Don't combine them.
Without offering an alternative solution I'm not sure what your co-workers are trying to accomplish.
The article doesn't offer much of a solution either.
The only valid criticism I see in these threads is that unless you are tracking a revocation list, the JWT is still valid after a user-initiated logout until it expires.
If you're going to roll your own tokens, you're probably going to end up with some form of tamper-proof encryption even if you're storing additional session information on the backend.
Without specific comparison to other styles of session management this is just all FUD to me.
That's more or less what we ended up with.
We could choose one the following two extremes:
a) Store the sessionId as a cookie in the browser and pull all details on the backend from a cache server. This requires a network call every time any tiny user detail is needed; not a showstopper, and it worked for us for quite a while, but we wanted something more convenient.
b) Fully stateless backend, keeping the entire session in the JWT. Our users' sessions are on the huge side, plus a user's session has private data (we declined this option rather quickly).
Instead we ended up with JWT which contains sessionId + bunch of most commonly accessed user properties (various identifiers, roles, login time, etc).
Now it seems that somehow I missed this "JWT means stateless" thingy; well, let's see what people say.
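An illustrative payload for the hybrid approach described above (the claim names here are made up for the sketch; the shape is "revocable session pointer plus cached hot data"):

```python
# Hybrid token payload: "sid" keeps the session revocable on the backend,
# while the hot, frequently-read properties ride along in the token itself.
hybrid_claims = {
    "sid": "9b61f5c2a7d34e1f",   # session id: the backend can still revoke it
    "sub": "user-42",            # user identifier
    "roles": ["reader"],         # commonly checked, cheap to carry in the token
    "login_at": 1700000000,      # login time (unix seconds)
    "exp": 1700000900,           # short expiry bounds how stale the cache gets
}

def needs_backend_lookup(path):
    """Hypothetical routing rule: only sensitive paths verify "sid" server-side."""
    return path.startswith(("/account", "/billing"))
```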
So if a user’s role changes (e.g. some privileges are revoked), they can still act in their old role until their token expires? Sounds like you have a security issue. Which website is this again?
Privileges for doing anything serious (e.g. perform payments or change security preferences) are not stored in JWT, all checks done on server side.
In case of emergency (e.g. we suspect a user session has been taken over) we lock the user account and purge the session from the backend. Without a session on the backend, the JWT won't do much.
It's a fintech consumer website (sorry, can't disclose the name).
In my experience Django Rest Framework with JWT + Vue or React on the frontend + some JS package that stores and refreshes tokens is solid as a rock and easy to implement/maintain. Your end users will never need to know what JWT means and you won't need to do any kind of token management. As far as I know, Instagram's stack is a variant of this.
7ish years as a security engineer, and all my gut instincts tell me that your site has auth bugs just from hearing the spec. It might be perfectly secure, but if I were put on your project tomorrow I'd be budgeting weeks to review and work on that auth system. And even then it's a footgun waiting for other devs to screw it up later on.
The author says one of the selling points of JWT is scale, and points out that many will not need to concern themselves with scale and should stick with simple solutions. But I am curious to know whether anyone on HN is using session cookies at scale, and how they solve the scalability issues of session cookies alluded to in the article?
A surprisingly small number of dollars per month will give you a Redis instance that will solve your sessions needs for a truly absurd scale. And Redis scales horizontally pretty well too. (I say Redis, but there's many equivalent tools that'll work just as well.)
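A minimal sketch of such an opaque-token session store (a plain dict stands in for Redis here so the example is self-contained; with Redis you'd use SETEX/GET/DEL keyed on the same token, and the TTL would be handled for you):

```python
import secrets
import time

class SessionStore:
    """Opaque-session store. A dict stands in for Redis in this sketch."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._data = {}

    def create(self, user_id):
        token = secrets.token_urlsafe(32)   # high-entropy opaque session id
        self._data[token] = {"user": user_id, "exp": time.time() + self.ttl}
        return token                        # goes into the cookie

    def get(self, token):
        record = self._data.get(token)
        if record is None or record["exp"] < time.time():
            return None
        return record

    def destroy(self, token):
        self._data.pop(token, None)         # logout is a single delete
```

The token in the cookie carries no information; everything, including instant revocation, lives server-side.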
Scale is relative, but...let's put it this way. If you're at the sort of scale where you might struggle to trivially use sessions at scale:
1. You'll know. If you're not sure you're at that scale, you definitely aren't.
2. You shouldn't be taking advice from Hacker News; go ask the large team of experts you've already accumulated about what works best for you. If you don't HAVE a large team of experts on staff already about operating at your massive, massive scale, then see point 1: You're not at that scale.
I've been heavily involved in an active-active session service that handles 100kqps+. It's not that hard to achieve with Redis.
If you model your sessions as an acyclic state machine, you can quickly consolidate a single view of all sessions globally.
Beyond this, it's valuable to be able to scope sessions, invalidate subsets of sessions, create delegate sessions, provide an audit trail for internal support staff using session impersonation, etc.
JWTs can't give you a lot of the rich security behaviors that real sessions can, such as asking a user to use 2FA for editing sensitive data if they haven't provided that information in the last 15 minutes. Or estimate user idle time.
You might want to invalidate sessions from a particular device, IP, or time period. Or, if you detect a user is compromised, terminate all sessions on their behalf.
You might want API and web sessions to have different properties, such as duration. Or limit access to certain endpoints based on device type.
You might have a mechanism to turn one session type into another so that your app can open a browser and have the user already logged in.
If your model has a superuser that has oversight into a lot of account-like views, you might want the ability to constrain it to a subset of permissions while handing it off to something else, especially if those sessions are longer lived. Or give it a subordinate view.
You might want to assign confidence intervals to sessions based on heuristics and ask for a second factor for operations that require a given score.
I think the whole "sessions don't scale" predates an era of cheap, shardable/distributable and performant options from NoSQL to RDBMS.
It used to be a serious bottleneck for even smaller sites. But they also used to stick them in stodgy, table-locking storage engines in the same database as the rest of their application.
Times have changed and I really believe you can manage cookies<->sessions without too much overhead. And if you're willing to let a JWT have a 5-minute max life, you could similarly cache your session responses for 5 minutes.
Like all things, there comes a tipping point eventually. But it's not as early or as drastic as it once was.
If you're looking for personal experience, I don't have it...But author does say "...but it’s common for sites to use systems like Redis and Memcached, which works for tiny sites, but still works at massive scales."
...which would be my first WAG if I had to design it for scale.
> it has a ton of features and a very large scope, which gives it a large surface area for potential mistakes, by either library authors or users of those libraries
I mean that could apply to so many things — PKI, Windows permissions, and countless other things. It definitely strikes a chord with me.
How about salting the token with an additional user-specific random string? During logout, you replace the random string so it no longer matches. Then we can have a partial check (only signature validation and expiry time) and a full check (comparing against the salt, hitting some store). Full checks can be done only for API calls which deal with sensitive stuff and/or for writes (writes are less common than reads).
For example, fetching an article on something like New York Times would only require a partial check, but the profile page would have full checks.
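A sketch of that two-tier scheme (names are illustrative; the per-user salt table would live in a fast KV store, and signature verification is assumed to have happened before either check):

```python
import time

# Per-user random strings, rotated on logout. Only consulted on full checks.
user_salts = {"user-42": "salt-v1"}

def partial_check(claims):
    """Cheap check: expiry only. No store lookup."""
    return claims["exp"] > time.time()

def full_check(claims):
    """Expensive check: also compare the salt baked into the token
    against the current per-user salt; logout rotates the salt."""
    return partial_check(claims) and \
        user_salts.get(claims["sub"]) == claims.get("salt")

def logout(user_id):
    user_salts[user_id] = "salt-v2"  # any fresh random value works
```

After logout the token still passes partial checks until it expires, which is exactly the window this scheme accepts for non-sensitive reads.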
If you need to look up the salt for every user on every verify, you're wasting any caching advantage. If you cache the salt, you're reintroducing the cache clearing headaches.
Not on every verify, that's what full checks are for.
The salt isn't saved in the token for caching purposes, it is used to invalidate the token when a user logs out (detected only on full check when salts change).
Salt is stored inside the token like other JWT data so it can be ignored for non-sensitive calls (partial check).
The point is that you probably don't want a truly stateless backend.
What you probably want is to store a small amount of critical state in a very fast, very scalable KVS, allowing most of the backend to be stateless.
So much of the discussion around JWTs boils down to "I want a stateful backend, how do I use JWTs for this?" (Answer: Not well. Why use JWTs?) Or "I have implemented a stateless backend, I'm using JWTs, how do I work around the limitations inherent in being stateless without adding state?" (Answer: You can't. Why are you using a stateless backend?)
Thanks for the thoughtful reply. I’m thinking in an enterprise context, where I have many apps, many apis, many teams. Bolting auth to the gateway (with jwt) makes a lot of sense, and for very sensitive endpoints, maintain a blacklist.
The high-perf kv is nice and all, but also quite complex at “enterprise” scale (meaning, lots of apps/people not throughput)
That's fair. Although you can bolt auth to the gateway with sessions, too.
Perfectly valid design to inspect the request at the gateway, strip off the cookie, look up the session, then attach the session data to the request and forward it off to your internal micro services.
Authn at the edge makes a ton of sense when you hit a certain scale, or if you adopt certain patterns, but a lot of the mechanics (cookies versus other headers versus included with the request, signed payloads versus opaque tokens) are orthogonal to that.
Plenty of people have adopted JWTs because they're "better for microservices", and the next thing you know they've got 80 different microservices all independently checking the JWT against a centralised revocation list. (...well, you HOPE all 80 are doing that, but inevitably, some won't be...)
I think handling authn right is more about culture than any specific tech, really.
this author is a fucking genius. i always say that the smartest people can take complex subjects and make them seem simple. this dude turns sessions and jwt into child's play. this guy is a fucking genius.
Why is "JWT" not defined or explained until I scroll a few pages down and find one that's hyperlinked? I thought it was talking about the James Webb Telescope this whole time.
The article and associated links are weak on details - most of them are some form of setting the alg incorrectly.
> First, it’s a complicated standard
Actually it's remarkably simple and very well thought out; I frequently generate them in plain bash for testing. It is the simplicity that makes me trust JWT (where appropriate).
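To illustrate that simplicity, here is a hedged sketch of generating an HS256 JWT with nothing but the standard library (the parent does it in bash; the same few steps work anywhere you have base64 and HMAC-SHA256):

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    """Base64url without padding, per RFC 7515."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_hs256_jwt(claims, secret):
    """header.payload.signature, each part base64url-encoded."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                     + "." +
                     b64url(json.dumps(claims, separators=(",", ":")).encode()))
    sig = hmac.new(secret, signing_input.encode("ascii"), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

token = make_hs256_jwt({"sub": "1234567890"}, b"test-secret")
```

Verification is the same HMAC computed over the first two parts and compared (constant-time) against the third.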
> If they’re (Auth0) not safe, what chance does the general (developer) public have?
Auth0 forgot to check letter case in alg = 'none', a very basic coding error. These errors can happen anywhere; in any case, use a whitelist for algorithms. The other links are quite similar, such as using a weak alg.
> A last issue with JWT is that they are relatively big, and when used in cookies it adds a lot of per-request overhead.
Most apps simply put userid and roles in the JWT, and that's negligible overhead. Of course there are people who do it wrong, but there are also people who store 10MB of data in a session.
My view is that JWT has a place in the future, and it works well even with decentralized systems. The ecosystem around JWT is rich, and over the years has evolved to solve many common usecases which come up in production. An example would be the JWK/JWKS spec, which greatly simplify how verification keys can be fetched.