Why JWTs Suck as Session Tokens (2017) (okta.com)
173 points by enz on Aug 30, 2018 | 151 comments



I've read several articles along these lines now, and I tend to think the arguments are pretty weak.

In this media rich age, the data size argument is a bit silly.

The "you're going to hit the database anyway" argument, whilst probably accurate in most cases, doesn't invalidate the fact that JWT allows for one or more fewer database hits on every request.

Having built-in integrity checking is definitely a feature. Just because you can do it without JWT doesn't mean that it's not useful that JWT does it.

IMHO the biggest argument against the use of JWT is that you can't easily invalidate JWT sessions. Should you need to dump a user's session, or if the information contained in that session token has become invalid, then you might be in trouble. For my use cases so far, however, this hasn't been a problem.

JWT is a fine solution for quite a lot of use cases. As with everything tech, just be aware of the limitations and choose wisely.


I've been fond of [1] and [2] for a more focused argument against JWT for sessions.

The JOSE standards (of which JWT is a member) are error-prone and have had numerous critical security-affecting bugs due to how they were designed. [3]

To remedy that, I proposed PASETO. [4]

I still don't recommend PASETO for sessions, because of the arguments laid out in [1] and [2].

[1] http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-fo...

[2] http://cryto.net/%7Ejoepie91/blog/2016/06/19/stop-using-jwt-...

[3] https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-ba...

[4] https://paseto.io


+1 There are so many arguments against JWTs as session tokens. It's just so long and so much work to describe them all and respond to each counter-argument. Sven does the best possible job of summarizing it.

Also: PASETO is really fantastic, thanks for creating it! I've started mentioning it in my talks and using it for internal projects -- I really enjoy it so far =)


"alg": "none" comes to mind. Azure had it unpatched for close to a year. God knows how many corporate mailboxes and contact directories flew through that hole, which went largely underreported. All an attacker had to know was the ID of an app that had previously been preapproved, a trivial thing, given that it was handed out in OAuth requests for user permissions.
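For readers unfamiliar with the attack: a JWT header declares its own verification algorithm, and a verifier that trusts that field will accept a completely unsigned token. A deliberately naive, stdlib-only sketch of the bug (a strawman verifier, not any real library's code):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

# An attacker forges a token whose header claims "alg": "none" -- no signature.
header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "admin"}).encode())
forged = f"{header}.{payload}."   # empty signature segment

def naive_verify(token: str, key: bytes) -> dict:
    """Deliberately broken verifier: it trusts the algorithm named in the token."""
    h, p, sig = token.split(".")
    alg = json.loads(b64url_decode(h))["alg"]
    if alg == "none":   # the bug: the attacker controls which branch runs
        return json.loads(b64url_decode(p))
    expected = b64url(hmac.new(key, f"{h}.{p}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(p))

# The forged token sails through without any knowledge of the server key:
assert naive_verify(forged, b"server-secret") == {"sub": "admin"}
```

The fix is to never read the algorithm from the token: the verifier must pin the algorithm(s) it accepts server-side.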

The industry will continue to pay a dear price for mixing the rogue and unruly realm of web development with what was previously the domain of very conservative "enterprisey Java development shops", where everybody dresses in a suit and at least has some formal CS background beyond a bootcamp course.


I concur with the author about session usage of JWT, since it has little value there (and if you did use it, you'd just use it for its signing feature and still have it contain only a random session identifier). The reasons for defending it in that use seem to be about what's not bad, but rarely about what's good.

But JWTs have value as API tokens, since they can embed an expiration date for the caller in a known format. The idea of stateless JWTs carrying a bunch of valid data for use on successive calls, though, is a bit much. You should contact your auth store of record per invocation, for various reasons.

Like you said, be aware of the issues with the chosen signature algorithms, be exact about what you choose, and leverage JWT only as the format, rather than blindly following the generation libraries without investigation.


"about session usage of JWT since it has little value "

Two weeks ago, I stood in front of a room full of senior developers and architects and asked them, "How will we avoid making a leading 'Does the user have permissions?' call, or wrapping the request in a try/catch in case the user doesn't have access, without JWT claims?"

They all got mad. Then they conceded that they don't have a solution. Do you have a solution? If not, you can't replace JWTs.


Even if we accept the assumption that you actually have to avoid it (I don't!), here are some:

- Database permissions, including row-level security

- Macaroons

- PASETO

- Fernet/secretbox/HMACing a JSON blob

These are different approaches so it's hard to summarize, but they all either give you a property JWT doesn't or avoid a flaw JWT has. You can't seriously believe that nobody did this before 2015.
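To illustrate the last option, here's a minimal sketch of "HMACing a JSON blob" as a session token (stdlib only; the names and TTL are illustrative, not any particular library). Because there is exactly one algorithm and no header, there is no algorithm-confusion surface:

```python
import base64, hashlib, hmac, json, time

SECRET = b"server-side-secret"  # illustrative; load from config in practice

def issue(claims: dict, ttl: int = 900) -> str:
    """Serialize claims with an expiry, then append an HMAC-SHA256 tag."""
    body = json.dumps({**claims, "exp": int(time.time()) + ttl}).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(body + tag).decode()

def verify(token: str) -> dict:
    raw = base64.urlsafe_b64decode(token)
    body, tag = raw[:-32], raw[-32:]   # SHA-256 tag is always 32 bytes
    if not hmac.compare_digest(tag, hmac.new(SECRET, body, hashlib.sha256).digest()):
        raise ValueError("tampered token")
    claims = json.loads(body)
    if claims["exp"] < time.time():
        raise ValueError("expired")
    return claims

claims = verify(issue({"user_id": 42}))
```

Fernet and secretbox are the same shape with better primitives (and encryption); the point is that the verifier, not the token, decides how verification works.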


This is a real production issue for me, so could you please elaborate on why you think one of these (whichever you prefer) is better or link me to a source?


We wrote a big-ol blog post about inter-service auth that has a lot of relevant stuff: https://latacora.micro.blog/2018/06/12/a-childs-garden.html

RLS means your database understands what a user is allowed to see and not allowed to see. It is usually much simpler to express authz in a SQL constraint than it is in code. It's also harder to forget, so you don't get an authz failure in like, one endpoint.


I wouldn't recommend Macaroons so lightly. I spent 2 months researching them for internal use, reimplementing them from the paper and comparing them with existing implementations.

There are numerous practical issues with Macaroons: the caveat format is not standardized (neither the binary encoding nor the allowed claim types), which matters a lot when using third-party caveats. There are some de facto formats, but they are full of issues (e.g. a date format that isn't exactly ISO).

Validation of a set of Macaroons requires walking a graph, which needs cycle detection. The implementations I've checked do not allow nested third-party caveats, which invalidates one reason to use them.

Attenuation is nice, but it doesn't play well with third-party caveats (the hash is used as a key for decryption, and appending to a third-party caveat changes the hash).

Then there are implementation errors such as: https://github.com/nitram509/macaroons.js/blob/master/lib/Cr...

Some of these issues could be removed by slightly changing implementations, but the de facto implementation has basically frozen the format. (And the de facto implementation has already changed some aspects from the paper, e.g. the calculation of the hash for an appended third-party caveat.)


> We wrote a big-ol blog post about inter-service auth

Wow what a treasure trove of goodness. Thank you!


I will check it out, thanks.


I'm not sure how you intended this comment, but it reads a bit like a solicitation for lvh to do free work.

I wrote PASETO, with a lot of feedback from cryptographers and security engineers, to avoid a lot of the design flaws of JWT.

Learn more about it here: https://paragonie.com/blog/2018/03/paseto-platform-agnostic-...

You can find a lot of implementations available: https://paseto.io

(Also: You almost certainly want to use v2)


I wanted a link to your solution because I have never heard of it before. Not sure where you are getting the "free work" part of this from, unless you want to be paid per comment. Careful with the free advertising with PASETO there.

Joking on the last two sentences, but that's how you sound when you accuse me of wanting free work. Thanks for the links.


@CiPHPerCoder Looks like there is a limit to reply nesting, so replying here. Come on, I asked for a link or a description. I didn't even provide a single word about our stack. Nor do I know who the guy is, or anyone on this site, for that matter.


I'm happy to hear you didn't mean it that way.


> Not sure where you are getting the "free work" part of this from, unless you want to be paid per comment.

Oh, that's easy to explain. You said:

> This is a real production issue for me, so could you please elaborate on why you think one of these (whichever you prefer) is better or link me to a source?

Specifically:

> This is a real production issue for me,

If you want a cryptographer (i.e. lvh) to solve a real production issue for you, that would in most cases be a business transaction.


>If you want a cryptographer (i.e. lvh) to solve a real production issue for you, that would in most cases be a business transaction.

If we're being uncharitable, yes.

But the parent didn't ask the other to sit down and write code, or consult, or design a system for them. In the course of an already existing discussion on the merits (and the faults) of various session auth schemes, the parent asked the other person to elaborate on why he said something.

Which people do all the time without getting paid, and the grandparent was already doing (offering his opinion) anyway.

So the fact that it's a "real production issue" for them is irrelevant. I participate in conversations all the time concerning something that is a real business issue for me or the others, and nobody feels we should be getting paid because we had a talk. In fact, half of the discussions on HN concern frameworks, tools, deployment schemes, etc. that we use in production, have "real production issues" with, and are interested in getting others' opinions on.

I think one can easily see how this is different from a proper consulting gig.

>"If you want a cryptographer (i.e. lvh) to solve a real production issue for you, that would in most cases be a business transaction."

That sounds like some kind of caricature of a high-street lawyer who charges from the first minute, even the people they casually talk with. As if answering a comment on HN were equal to doing a consulting gig.

It's doubly uncharitable since the parent asked nicely and also added "or link me to a source". Should he be charged for a link too?


> So that it's a "real production issue" for them is irrelevant.

I respectfully disagree.

If it was truly irrelevant, it didn't need to be brought up in the first place.

But it was, and it's what made me believe that the other person was trying to solicit for a security expert to solve a production problem for them without an invoice being involved. It was relevant to my interpretation.

They insist they didn't mean it that way, and I believe them, but it was still relevant.

Whether or not it was relevant in the comment I replied to, it became relevant once it was entered into the discussion.

You can call that "uncharitable" if you want. I don't really have a horse in that race.


>If it was truly irrelevant, it didn't need to be brought up in the first place.

It's not like people must have some hidden agenda, or that they necessarily consciously "bring things up" with some ulterior motive.

The parent just shared some context, that he has an issue related to the discussion. If anything, if they really had some hidden motive, like getting market-worthy consulting as part of a HN comment reply (!), they'd have, well, hidden the fact that they have this problem at work.

It's totally common to casually mention that "you know, this issue we're discussing on this thread I also have at work, so why do you say this approach sucks, and which do you suggest instead".

In fact it happens all the time on HN, between regular developers, the occasional star scientist or programmer (from Alan Kay to Ryan Dahl and Dan Brown), and even far more important and busy pros than some security expert, and I've never heard anybody counting their lost pennies from what they'd have gained if they charged for talking to them...

It's also not like the person the question was addressed to can't handle the matter themselves, and e.g. not answer if they feel they need to be paid for their musings...


> It's totally common and perfectly innocent to casually mention that "you know, this issue we're discussing on this thread I also have at work, and why do you say this approach sucks and which do you then suggest".

The structure of the comment in question is also relevant:

"This is a real production issue for me, so could you [...]"

This reads like a demand if you parse it in spoken English.

If it helps, imagine working in retail and hearing a disgruntled customer ask you to hurry up and give them priority service. Their request might be structured like, "I need to pick my kids up from school at 3, so could you hurry it up?"

Your word choice is far less demanding than theirs. If they wrote their comment the way you just wrote yours, it wouldn't have struck me that way.

The entire reason I brought it up was because I was unsure of the intent. They clarified their intent. I believed them. Life moved on. You're still trying to litigate this.

But none of this matters. What matters is, they didn't intend it that way, and JWT sucks.

> In fact it happens all the time on HN, between regular developers, the occasional star scientist or programmer (from Alan Kay to Ryan Dahl and Dan Brown), and even far more important and busy pros than some security expert, and I've never heard anybody counting their lost pennies from what they'd have gained if they charged for talking to them...

To be fair: me neither. But knowing how humanity is, I wouldn't totally discount that it does happen somewhere on the Internet (maybe even HN).


> How will we avoid making a leading 'Does the user have permissions?

You don't avoid it, you do it. Gathering simple user info including permissions should be the first step at the request boundary and it can traverse the life of the request. If you foolishly use a stateless token to read permissions, you're gonna be annoyed when changes you make don't take effect immediately. I trust your seniors know this (of course different situations and caveats apply, but this is referring to the general approach).


I am well aware of this. What you are advocating is an unnecessary database call on every single request.


Generally a big DB query is made when the user enters the site or makes their first API request, to load all relevant user data. The data is then stored in a cache system, which will be faster and cheaper than having the user send the data on every request.

Most modern web frameworks will do this out of the box with a little configuration.

Specifically, I've had good results with Redis, applicable for the majority of web apps. In some rare cases where even local network is too slow, use of local memory storage has worked well (at the cost of greater memory usage and more complex invalidation procedures).
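The pattern is the same whether the cache lives in Redis or local process memory. A toy in-process version (a dict plus TTL, purely illustrative, not any particular framework's API) looks like:

```python
import time

class PermissionCache:
    """Tiny in-process stand-in for Redis/Memcached: value plus expiry time."""
    def __init__(self, ttl: float = 300.0):
        self.ttl = ttl
        self._store = {}  # user_id -> (permissions, expires_at)

    def get(self, user_id):
        entry = self._store.get(user_id)
        if entry and entry[1] > time.monotonic():
            return entry[0]
        return None  # miss or expired -> caller falls back to the DB

    def put(self, user_id, permissions):
        self._store[user_id] = (permissions, time.monotonic() + self.ttl)

    def invalidate(self, user_id):
        # e.g. on password change or permission revocation
        self._store.pop(user_id, None)

cache = PermissionCache(ttl=300)
cache.put(42, {"read", "update"})
assert cache.get(42) == {"read", "update"}
cache.invalidate(42)
assert cache.get(42) is None   # next request hits the DB again
```

The `invalidate` hook is what a stateless JWT can't give you: a server-side cache can be purged the moment permissions change.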

But before optimizing user permissions, in my experience there is much more to be gained from optimizing other types of DB queries.


That's exactly what I'm advocating; it's not unnecessary, and it should be trivial. If you are afraid of overhead, you can add a caching layer à la Redis or Memcached, or even local process memory (but don't eagerly optimize; the cost of making this call is trivial). Either way, you need to communicate with an up-to-date cache or a user store of record. Don't trust information sent to you from the client, because it may not reflect recent updates (there are exceptions when you can tolerate staleness, but for the general approach this is what you want to do).


What's wrong with a DB call on each request? A single server can handle over 10,000 of those per second.


But... that's not really something you _have_ to avoid. Check permissions, if they fail the test -> http401 (for an API) or some user-friendly redirect. Something similar to this is how things work without JWT currently, so it's only a problem if you make it one.


You seriously think making a redundant call or wrapping every. single. controller into a try-catch is better than having claims pulled out in a request pipeline (before even touching the controller) and doing `if(hasAccess){do thing} else {unauthorized}`?


It sounds like you're arguing from a very specific mental model of an ACL workflow.

In my CMS, I had support for granular permissions. So you could do this:

  if ($user->can('update')) {
    if ($postData) {
       $this->processUpdate($postData);
    }
    // display edit form
  } elseif ($user->can('read')) {
    // read-only
  } else {
    return error_403_condition();
  }
JWT wouldn't have helped much.


I will look into this more and come back with what I figure out later on. Thanks.


If you're not willing to risk users making requests with stale permissions (which is a risk you shouldn't accept lightly), then JWT requires that you hit something at the start of processing every request anyway. It can either be a token blacklist service (really just a key-value lookup), or it can be an auth/permission service.

The auth service/query is higher per-request overhead, but it also keeps things simple. And simple is what you want unless you're dealing with ridiculous scale.


I don't know what you're working with, but you don't need a JWT to figure out access in the request pipeline. A session ID does the same thing and allows you to associate a session to a user, and most frameworks are capable of doing this already (e.g. django has middleware that can do this)


That looks similar to a try/catch... You just called it if/else.

Also, why can't you make the database request in the request pipeline, right before that "if(hasAccess)" statement. You don't need JWT for this...

You are already wrapping all of your controllers...


If your controllers are asp.net mvc controllers you can decorate them with permission attributes (see the relevant docs for your version).

Pretty sure most frameworks have a way to structure your code for permission checking.


For signout, we use token and user revocation lists represented as Bloom filters that services can poll, or query directly if they need more consistency. We've found this actually works well and efficiently in practice at very large scales.
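The revocation-list idea above can be sketched with a bare-bones Bloom filter (stdlib only; the sizes are chosen arbitrarily for illustration). The trade-off: it can return false positives, which trigger a fallback check against the authoritative store, but never false negatives, so a revoked token is never accepted:

```python
import hashlib

class BloomFilter:
    """m-bit array with k hash positions; false positives possible, false negatives not."""
    def __init__(self, m_bits: int = 8192, k: int = 4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def maybe_contains(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

revoked = BloomFilter()
revoked.add("jti-1234")
assert revoked.maybe_contains("jti-1234")  # a revoked token is always flagged
# A hit on a non-revoked token is possible but rare; services then
# double-check the authoritative revocation store, as described above.
```

Services can poll a compact serialized filter periodically, which is what makes this cheaper than a per-request revocation lookup.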


> The "you're going to hit the database anyway" argument whilst probably accurate in most cases, doesn't invalidate that JWT allows for one or more fewer database hits on every request.

Using random session cookies does not prevent you from using caches.

> In this media rich age, the data size argument is a bit silly.

JWTs aren't cached, and cookies are sent on every request, so there's a much bigger multiplier on JWT size cost than there is on media size cost.


For invalidating, you just need to store the JTI (the token ID, as a UUID) in the DB. On challenge, you compare the JTI of the provided token to the stored one, and if there's a mismatch you 401. For the actual revocation, you can just clear the active JTI and have app code reject tokens accordingly. It's certainly arguable whether this is better or worse than session tokens, but it's not unsolvable.
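A sketch of that JTI scheme, assuming the token's claims have already been signature-verified (the dict stands in for a users table with a hypothetical active_jti column):

```python
import uuid

# Hypothetical stand-in for a users table with an active_jti column.
users = {42: {"active_jti": None}}

def login(user_id: int) -> str:
    jti = str(uuid.uuid4())
    users[user_id]["active_jti"] = jti   # persist alongside the user record
    return jti  # embed as the "jti" claim when minting the JWT

def check_jti(user_id: int, token_jti: str) -> bool:
    """Reject (401) any token whose jti doesn't match the stored one."""
    return users[user_id]["active_jti"] == token_jti

def revoke(user_id: int):
    users[user_id]["active_jti"] = None  # all outstanding tokens now fail

jti = login(42)
assert check_jti(42, jti)
revoke(42)
assert not check_jti(42, jti)
```

Note this limits each user to one live token at a time; supporting several concurrent sessions would mean storing a set of JTIs instead.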


It's not unsolvable, just incompatible with the premise of "use JWT for sessions" which is to avoid the database hit entirely.


Yup, I can't believe the author didn't go over something like this. It's the perfect use for the JTI. Put it as a column on your users table and you're all set. We use a Ruby authentication library that does exactly this.


I have trouble understanding why session invalidation with JWTs is challenging. The JWT itself contains an IAT ("issued at" time) describing when the token was created. To invalidate all sessions, you can store (in your DB or other persistence service) an "invalidated at" date. Any request with a JWT issued before this "invalidated at" date can be considered invalid.
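A sketch of that check, with a dict standing in for the DB (names are illustrative):

```python
import time

invalidated_at = {}  # user_id -> epoch seconds of last "log everyone out" event

def is_token_valid(user_id: int, iat: int) -> bool:
    """A token is valid only if it was issued after the user's invalidation point."""
    cutoff = invalidated_at.get(user_id, 0)
    return iat > cutoff

token_iat = int(time.time()) - 3600          # token issued an hour ago
assert is_token_valid(42, token_iat)
invalidated_at[42] = int(time.time())        # e.g. password change: kill all sessions
assert not is_token_valid(42, token_iat)     # old token now rejected
```

The catch, as the replies below this comment point out, is that the cutoff lookup happens on every request, which reintroduces the state the JWT was supposed to eliminate.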


Sure, but then you’re calling your DB to validate the JWT every time anyway. You’ve just removed the whole “JWTs are cool because they’re stateless” benefit.


I’ve gotten used to the Amazon style of using an accessToken and a refreshToken. The accessToken expires in 5 minutes, and the refreshToken is used to get a new access token. So at most you have a 5-minute window of access.

More boilerplate code and bandwidth, but it works fine.
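A minimal sketch of that access/refresh flow (names and storage are illustrative; real systems sign the access token as a JWT and keep refresh tokens behind an auth server):

```python
import time, uuid

ACCESS_TTL = 300            # 5 minutes, as described above
refresh_tokens = {}         # refresh_token -> user_id (server-side, so revocable)

def mint_access(user_id: int) -> dict:
    # Stand-in for a signed JWT; only the short expiry matters for this sketch.
    return {"sub": user_id, "exp": time.time() + ACCESS_TTL}

def login(user_id: int):
    refresh = str(uuid.uuid4())
    refresh_tokens[refresh] = user_id
    return mint_access(user_id), refresh

def refresh_access(refresh: str) -> dict:
    user_id = refresh_tokens.get(refresh)
    if user_id is None:
        raise PermissionError("refresh token revoked -> full re-login required")
    return mint_access(user_id)

access, refresh = login(42)
access = refresh_access(refresh)   # client calls this when the access token expires
refresh_tokens.pop(refresh)        # revoking the refresh token closes the 5-minute window
```

The revocable state lives only on the refresh path, so most requests stay stateless while a stolen access token is useful for at most five minutes.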

My issue with the article is that we’ve generally stopped building our own auth and now lean on Cognito or Auth0 for authentication for the sites we build for third parties. Those services provide so much more out of the box than home-rolled solutions (MFA, etc.).


> I have trouble understanding why session invalidation with JWT's is challenging.

Imagine you're connecting to a service that uses JWT for sessions so they don't have to store anything server-side.

Let's say you have a token with an IAT of yesterday and an EXP of, say, a month from now.

Further, your browser gets infected with malware and the attacker steals your token. You rebuild your computer from a fresh install.

How does the service invalidate the token while still being stateless? It has 30 days left.

Your next move is:

> _


You should have tokens with short durations that are refreshed upon use. That greatly reduces the odds of a token being stolen or leaked and then used before it expires.


You're getting closer to the problem:

JWTs were designed to be single-use (or very, very short lived) claims (with optional cryptography features).

It was never meant to be "offload everything to the client and obviate the need for server-side storage". It was never meant to be the new hotness among the NoSQL Scalability crowd.


But that's just it: that only works for all tokens at once.

What if a user changes their password? Until that token hits its time limit, they've got free rein. Or you use denylists, requiring the database again.


The problem with blacklisting is that then you end up doing the database hits and avoiding those was one of the reasons for adopting signed tokens.

I see the value of signed tokens in complex infrastructure where you want to have one heavily guarded system doing the authentication and token assignment and then a bunch of other systems just validating the tokens.


>> IMHO the biggest argument against the use of JWT is that you can't easily invalidate JWT sessions

That's true. JWT is great for WebSocket connections because you can make the expiry really short (even 1 minute) and re-issue a new token in real time every 50 seconds, just before the previous one expires. Then if the user goes completely offline for 1 minute, they lose their session (the JWT becomes invalid).


I think the most important part of that is "you're going to hit THE database". A single session store is more efficient but tightly couples all services. There's a single point of failure at whichever server does auth. Unlike with JWTs, degradation or outages in token servers ripple through the whole system.

So when would you use session tokens? When you have a small application that you confidently predict will never outgrow being a monolith.


"In this media rich age, the data size argument is a bit silly." I couldn't agree more!! It's just so much easier to use JWT, especially if you've got more than one server (API).


Not a huge fan of this article (despite being a huge fan and consumer of Okta). The author makes a number of assumptions that are bad architectural practices these days, including: 1) stateful services everywhere (assuming you always have a database), 2) that you are only doing interactive sessions and not web services, and 3) that you have a single, non-federated issuing authority.

For stateful CRUD applications that don't need to scale, and have a centralized authority, by all means, use session cookies.

If, on the other hand, you have an application that needs multiple authentication sources, has an API on the other end that doesn't want to know about a separate authentication path, or runs stateless functions, JWT is a fairly good choice.

For most modern applications, session cookies create far more problems than the simple universal decision to use JWT across the board.


The one example this author gives for an appropriate use of JWTs isn't really any different from his previous ("bad") examples. In it, he suggests the case of an ensemble of services coupled to an authentication service, with an SPA front-end. The front-end obtains a JWT from the auth service and uses it to make API requests of the rest of the ensemble. Because the requests are "small" and "frequent", there's a savings to be had in using stateless authentication for them.

But SPA applications (and, for that matter, classic AJAX applications) have been making small, frequent API requests for 15 years. There is a standard solution to the problem of repeated session database lookups: caching. And in the unlikely instance of an application with significant enough usage that session lookups are problematic but no preexisting caching strategy, the standard answer here is simply to move sessions to a fast lossy database like Redis.

All the previous problems with JWTs remain in play in the author's "good" example.

There are better reasons to avoid JWT than are provided in this article, but it does a fine job of communicating the fundamental nut of the problem. You can easily do better, both for performance and for security, than JWT does. Chances are, your framework's existing session store already is better than JWT. Rails, for instance, has had stateless signed cookies for something like a decade.

We recommend you avoid JWT. If you want to be cool and use a non-default session token format, look into Macaroons.


We see JWT a lot. It's usually an accident and not really solving a problem.

Some notes for this blog post:

- It does not cover the worst issue with JWT (IMO): you don't get revocation by default. Some vendors use JWTs and still have revocation, but they do this via CRL management. CRL management is not easier than a session in a database.

- In the context of OAuth2, JWT is often a bearer token. This introduces a number of subtle flaws that OAuth1.0a did not have. Notably, in OAuth 1.0a, if I steal your credential for service X, I can't do anything with that without also having service X's credential. In OAuth2.0, it's a bearer token, so game over. You could argue this is an OAuth2.0 flaw -- or perhaps not even an OAuth2 flaw, because OAuth2.0 doesn't force you to do this. But OAuth2 and JWT make it the obvious choice, and so this model is now ubiquitous.

- It uses the phrase "signing" a lot. In JWT, signing usually means MACing, specifically with HMAC-SHA256. In other contexts, "signing" often means using asymmetric cryptography: a separate verification key and signing key. JWT supports both -- because JWT, in its folly, supports everything.


Comments ITT are not particularly charitable. The author says:

> It’s important to note that I don’t hate JWTs. I just think they’re useless for a majority of websites.

...and lays out a nuanced perspective of when they're an improvement over cookies at the end.


I think anyone who gives an article the now-cliché title "Why [popular thing] sucks" doesn't care about receiving charitable responses.

His use case at the end is describing microservice architecture. Microservice architecture is trending, ergo JWTs are trending. I don't see the problem.


This is ridiculous. It's 2018 and you're complaining about a few MB of bandwidth and a few seconds of CPU time? The whole point of JWT is that they are basically cookies for places where using cookies uniformly is not feasible. The author's solution is to go back to using cookies, which just doesn't work well enough. I'll stick with tokens for now thanks.


> ...a few MB of bandwidth and a few seconds of CPU time

Many people are on cell phones with low data limits. A "few MB", let's say 3MB == a few, represents 0.1% of someone's data limit. Sure, it's not that much on its own, but it adds up.

Likewise, a few seconds of CPU time is fine if you're on the latest iPhone, but if you're in a developing country on an inexpensive Android phone that few seconds of CPU time is going to turn into a world of hurt.

This cavalier attitude towards bandwidth and CPU time is outright hostile to certain classes of users.


The article mentions 100k page views being ~24MB extra a month, which means we are talking about a token of ~240 bytes. So for a single user you are talking about several kilobytes if they are multiple views to the server, which is now several orders of magnitude less than your original estimate.


For this single thing, this is not a big deal at all.

I was objecting to the "It's 2018 and you're complaining about a few MB of bandwidth and a few seconds of CPU time?" statement, not the technical detail of JWT adding an extra ~240 bytes.

I've seen statements similar to this applied to everything from big JavaScript libraries to large "Hero Images" to 2MB GIFs embedded in pages. It's a poor argument and it's representative of an attitude that's hostile to users.


The problem is you are talking about a different problem than the OP. The OP was talking about a few MB and CPU _from the server's perspective_, while you are talking about a few MB and CPU _from the client's perspective_. Yes it would be bad to willy-nilly force clients to take on a few MB per request, but that's not the issue being talked about.


This. I've often had discussions about APIs that take 100ms or more to return a result where the person writing the API, and even the product manager, do not understand that this response time is likely too long. Going back to the 1960s and the PLATO system, engineers recognized that humans need a response in 500ms or less, whether visual, audio, or haptic, to inform them that the system received the input. Therefore, to give a user that same 500ms response time today across the Internet, not just across the room to the mainframe, requires understanding the entire latency chain. One approach is to consider that any interaction has a 500ms budget which cannot be exceeded, and then start subtracting out the various latency components. Round trip across the USA: 150ms. DNS, connect, HTTPS negotiation, TCP setup, etc.: 25ms. Suddenly you're down to roughly 300ms of remaining budget. Let's assume 5 service API calls need to be made internally to provide the response: 300 / 5 = 60ms average budget per API call. I'm going to tell you that with today's CPU/RAM/SSD speeds, 60ms is a huge amount of computing time for a reasonable request.

tl;dr, remember the 500ms overall budget for the humans at the end of the pipeline. No one anywhere said I want my response time slower.


1/3rd a kilobyte to the user per page request. The MB and CPU time are to your server. Over the entire month. I'll pay that penny.


> It's 2018 and you're complaining about a few MB of bandwidth

Ah, yes, 2018, the Year Where Everything On The Internet Is Connected To A Cable.


It's 298 bytes per page request given the example he gives (your token can be smaller or larger depending on the data in it).

It's 24 MB of bandwidth to the -server-. Your server -better- be behind a decent broadband connection, or what are you even doing?

1/3rd a kilobyte per page request to the -client-. 1/3rd a kb is a blink of an eye even on modern dialup.


Why do you think it has to keep skyrocketing forever?! I'm already really disgusted by how big applications are nowadays; how bloated, slow, memory-hungry, and inefficient they are. Every saved MB counts.


In this case the article is talking about 24 MB of additional bandwidth used by the server to serve 100k pageviews using JWTs.


A few extra MB of traffic is fine in a datacenter. Not so much for a mobile client somewhere with a weak signal.


Very weak reasons for not using JWT. If 24 megs a month is the reason, you could easily remove an image from a high-traffic page to get the same savings.

The "you are hitting the database anyway" reason applies to a limited subset of use cases.

Sessions are simpler; use them on multi-page websites. For SPAs or apps, JWT is perfect.


I like to keep things lean where I can, but saying I have to pay 24mb per 100k requests doesn't sound like a huge deal. And like you say, keeping an eye on your image sizes will net you much bigger savings.


One of the biggest downsides in my opinion is that session invalidation becomes non-trivial. Your best bet (assuming you don't want to do any additional network requests) is to reduce the session length to the smallest amount you (or users, depending) will tolerate and perform some kind of re-authentication; i.e. force a logout and do a fresh login or check if they can get a new token based on the old one transparently. For example, a user changing their password should kill all tokens that are in use immediately for good security. You can't do that with JWT. All tokens will stay valid until they expire.


I haven't used JWT, but the way you solve this is by having a refresh token that lasts several days that lets you "login" without a password. The refresh token is then used to get the real session token, which has a low expiration, perhaps 5 minutes. When the session token expires you just "login" again.
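A minimal sketch of minting that short-lived access token with just the standard library (the key, claim names, and TTL here are made up for illustration; in practice you'd use a vetted JWT library):

```python
import base64, hashlib, hmac, json, time

SECRET = b"server-side-secret"  # hypothetical signing key, kept on the server

def b64url(data: bytes) -> str:
    # base64url without padding, as JWTs require
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_access_token(user_id: str, ttl_seconds: int = 300) -> str:
    # Short-lived access token: expires after ttl_seconds (5 minutes by default)
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"sub": user_id,
                                 "exp": int(time.time()) + ttl_seconds}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

token = mint_access_token("user-42")
assert token.count(".") == 2  # header.payload.signature
```

The refresh token, by contrast, would be an opaque random string stored server-side, so it can be revoked.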

But honestly I don't see the need for the vast majority of applications. Most frameworks cache the permissions, etc on login so the database doesn't have to be accessed on every request.


« If we store the ID in a cookie, our total size is 6 bytes. »

No. What? No.

You can't compare a signed JWT with a bare user ID. The proper comparison is a signed JWT vs. a cryptographically random session identifier with sufficient entropy. It's still smaller than the JWT, pretty much by definition, but make the right comparison, please.


He -does- indicate "For storing a simple user session, that is a ~51x size inflation on every single page request in exchange for cryptographic signing (as well as some header metadata)."

So he admits that the signed part is the tradeoff. But I totally agree with you, barely mentioning that, when that's the whole point (JWTs are used for authentication/authorization, not just easily faked identification), is incredibly disingenuous. Never identify users on the client side with something that isn't cryptographically secure. Somewhere, you or another developer will end up implicitly believing that ID is trustworthy, and you just introduced a critical security flaw.


A pretty common approach is to just give the user a version 4 (random) UUID as a session ID, which is 36 bytes in standard format. 304 / 36 = 8.4x, which is a much less impressive inflation.
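Quick sanity check on that arithmetic:

```python
import uuid

session_id = str(uuid.uuid4())  # standard hex-and-dashes form is always 36 chars
assert len(session_id) == 36
ratio = 304 / len(session_id)   # vs. a ~304-byte JWT
assert round(ratio, 1) == 8.4
```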


The offered solution to use a central memcache/Redis server is exactly why we moved to JWT.

First memcached and then Redis sessions were a continuous centralized source of failure we could remove, and honestly didn’t scale very cleanly approaching millions of users.

Removing moving parts from our stack was a major win.

That’s not to say JWT is without thorns, but overall it’s better than a centralized point of failure.


I am curious about your experience with Redis and millions of users...what were the issues you experienced?


In particular we weren't storing a ton per session, but were having to spin up more and more instances or suffer port exhaustion.

We go from about 25,000 requests a second during the day to 200 at night, and there was no good way we could find to autoscale it.

We scripted the build up process based on time and sometimes it wasn't meeting demand.

It was taking a lot of devops time. I'd had an experimental fork of our application using JWT floating around for a while and my manager made it a priority. It's never needed any maintenance and has been a real improvement for us.


The biggest complaint about JWT is the inability to sign out a user. You can, yes, delete the cookie on the client side, but if the JWT is stolen you can’t be sure that it won’t be used again (you can't invalidate it).

Same happens on reset password, where by the book you need to sign out the user from every device. Good luck with that having standard JWT implementation.


If you're using a salt, just change the salt and deploy. We frequently do this and it works great. All existing tokens are expired.
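A sketch of why rotating the signing key (or "salt") kills every outstanding token: any signature made under the old key simply stops verifying (keys here are hypothetical):

```python
import hashlib, hmac

old_key, new_key = b"key-v1", b"key-v2"  # hypothetical keys, before/after rotation
message = b"session-payload"

# Token signed before the rotation
sig = hmac.new(old_key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, msg: bytes, signature: str) -> bool:
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

assert verify(old_key, message, sig)      # valid under the old key
assert not verify(new_key, message, sig)  # dead after deploying the new key
```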


So you invalidate all tokens? Or is it a per-user-ID salt? If the latter, where do you keep the salts, and why not simply have a server-side session ID instead of the salt?


If you have your tokens take more than an hour to expire, you're doing it wrong.


Refreshing a specific token can not be blocked on the server side. If you do, it’s already stateful.


I am of the anti-JWT crowd, so I completely agree, but this would still only hit a "login service" which would know if a new JWT could be given to this token.

Still, an hour is a LONG TIME when a session could just kill all requests immediately for a user.


If that's the case, then why refresh? Might as well just hand out forever tokens, which, you're right, makes less than zero sense.

The point is that you have two tokens, one a refresh token and one a stateless token. You revoke the refresh token on the server, which means the next refresh attempt will fail.


Then why not just use "refresh token" as a session token?


Because it's stored and that would require a round trip to your auth server on every request. Plus, its security requirements are far higher than the access token, so you don't want to be flinging it all over the internet. It only ever goes between your users and your auth server.


My employer just started using Okta and I find it hard to take anything they say about security seriously after seeing some of the obvious bullshit in the immediate user-facing aspects of their product:

- allowing security questions to reset passwords

- allowing sms resets of passwords

- allowing sms 2-factor in combination with sms resets

I understand that my company is largely at fault for enabling these, but it's scary that they're even options. It's also pretty stupid that their password requirements are 8 characters, mixed case, and at least 1 number.


(Disclaimer: I work at Okta) One thing I've learned at a visceral level is how different threat models are from one organization to the next.

And while I agree that the things you brought up don't meet my personal standards for security, there are many organizations where those features are acceptable given their use case.


Under what threat model do you need 2 factor authentication but it's ok if both factors can be bypassed by SMS? There may be some legitimate use cases, but I don't think companies that need to farm out their security to a third party will usually be in the best position to make this call.


As I recall, for some consumer-facing applications the concern isn't targeted attacks, but automated attacks that try to re-use passwords. In a scenario like that, SMS is there to slow down automated attacks.

Keep in mind that I don't endorse that approach! Just giving an example of a narrow case where that applies.

That said, I appreciate your feedback and will personally take your feedback to our product group.


That's a good answer, but it doesn't sound like a reasonable threat model for corporate single sign on.

Thanks for taking my feedback to your group. I hope it helps.


How does it feel to have tighter personal security standard than your employer, an employer who specializes in security? Are you trying to improve the situation?


I'm sorry. I didn't mean to imply that Okta holds itself to a security standard that is lower than my own. What I was trying to say is that the reality is that some organizations do not need (or want!) a high level of security. For some organizations, security questions to reset passwords is an improvement over past process (!)

Naturally, as a company that specializes in security, we have a unique threat model and do not allow SMS resets of passwords, security questions, etc for our organization.

If you're genuinely interested in learning more, I'd suggest looking at our security certifications: https://www.okta.com/security/ or reading the blog posts by our in-house security team: https://www.okta.com/security-blog/


Thanks for the clarification!


PW requirements are defined by your company as well.


Now I'm even more disappointed with my workplace :(


The reason you should limit cookie sizes is request round-trip time.

A typical GET request involves multiple headers, including cookies. Cookies are sent for every request for any resource for a given domain. The more cookies you have, and the bigger they are, the bigger your request. The bigger your request, the more likely it is to take longer to send and receive it. The more of them you send, the more latency accumulates.

When The Cookie Crumbles https://yuiblog.com/blog/2007/03/01/performance-research-par...

Reduce Cookie Size https://developer.yahoo.com/performance/rules.html#cookie_si...

Performance Limits https://stackoverflow.com/a/6963585


The model the author is complaining about is far too widespread unfortunately. I've seen a much worse incarnation at a huge corporation that I won't name but whose identity sleuths can doubtless figure out.

Here's how this goes:

When a user authenticates, the system sends them a public key encrypted blob inside a cookie. When the user makes further HTTPS requests the blob will be sent back, the system can decrypt it with the private key and get back the data inside. Components of the system would squirrel away per-user data inside this blob and so the cookie might get updated as they surfed around the site or did things.

One day, people behind a local component using this contraption asked me for some "advice". They'd "filled up" the blob and there didn't seem to be a way to add more data but they needed to store something else, I worked for a different part of the company but I knew about cryptography, what should they do now? They'd found that there didn't seem to be a "cipher mode" option for public key cryptography...

And so that's the point where I couldn't decide whether to laugh or cry, after that I spent a few hours on international conference calls basically telling people that they're idiots and nobody should have even _designed_ this let alone built it.

Part way through that I explained that encryption doesn't magically mean bad guys can't change the inside of the blob AND that it doesn't magically prevent bad guys from stealing blobs they found in one place and using them elsewhere. So what is this even for? In mitigation the engineers who built it explained that, er, actually the first thing they do with the blob after decrypting it is to check their session database to ensure all the data matches up, if not it's invalid and the user just gets logged out silently. Thus, at the end of the day, it's just functioning as a session ID checked against a database anyway. All this cryptographic effort is in fact completely wasted and achieves nothing in practice except to make the system far more complicated and fragile. Brilliant. /facepalm


Sounds like a good way for job security.


JWTs are good for a bunch of things, one example being when you do NOT want to hit the database on every request, to speed up latency.. eg. in high-throughput IoT setups where you want to validate or discard the request before touching any DB at all


With http 2, repeatedly sending the same header value over a persistent connection should cause just a pointer to the previous value to be sent so I don't really buy the size argument here.


The unencrypted nature of these is something that gets overlooked a lot of times.

If you are transmitting metadata about some item, such as a song or a video, that’s one thing, but when it’s user info and the payload is not encrypted, you end up essentially leaking that data.

Other issues arise when you throw in logging and crash reporting - you may not even realize that that JWT session token just got logged and now you have user data where it doesn’t belong.


Most people here seem to fail to understand that apps rarely get slow and bloated through a single decision. It is tons of 1% or 0.1% decisions. Reading arguments here that this decision doesn't matter for size reasons, decoding performance, etc, is silly. You can argue the same thing about any of the other 100 individual decisions that, in the aggregate, made your app slow and bloated.


Besides various other issues (design and implementation-wise) and the fact that it's a useless optimization anyway, minting tokens is just such a headache in general. Avoid it entirely if at all possible.

If you think you need JWTs or similar tech, you don't need JWTs or similar tech.


Thought this was the panacea for serverless authentication


What would you recommend then?


Read some bytes from /dev/urandom to hand out as a session ID and put them in a database.
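Something like this, with the dict standing in for the database table:

```python
import secrets

sessions = {}  # stand-in for a real database table

def create_session(user_id: int) -> str:
    # 32 bytes of CSPRNG output (Python's secrets reads from the OS, i.e.
    # /dev/urandom on Linux), encoded as a 43-char URL-safe string
    sid = secrets.token_urlsafe(32)
    sessions[sid] = {"user_id": user_id}
    return sid

sid = create_session(42)
assert sessions[sid]["user_id"] == 42
assert len(sid) == 43
```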


EDIT: I misunderstood what was meant by minting tokens, my entire comment is pointless. Save yourself the time.

What's the headache about minting tokens? You create a new uuid() on the server, store it in the DB in its own table which has references to accounts, keep an isValid field next to it for server-side invalidation, and store it in localStorage in the client. Is there a security flaw or something in this?


So you've minted a token, stored that in the DB as well, and every time you see that token you verify the signature and then you look the token up in your DB to see if it's still valid. You do realize that at that point you could've just handed out a random string instead and avoid (1) creating and verifying signatures and (2) the size inflation created by those signatures, because you're not even getting any of the theoretical advantages of minting tokens.


If you're just handing out the random string without signing it (or performing some other constant-time comparison when validating), you're vulnerable to timing attacks
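The usual mitigation is a constant-time comparison such as Python's `hmac.compare_digest` (a sketch; the stored token is hypothetical):

```python
import hmac

stored_token = "a" * 32  # hypothetical session token held by the server

def check_token(candidate: str) -> bool:
    # compare_digest runs in time independent of where the strings first differ,
    # so an attacker can't learn the token byte-by-byte from response timing
    return hmac.compare_digest(stored_token, candidate)

assert check_token("a" * 32)
assert not check_token("b" * 32)
```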


It is a random string, I just created it with uuid(). Nothing was ever verified. Is minting tokens an official term that has to do with JWT? I thought it was just short-hand for the process I just described.


Minting tokens specifically refers to JWT-like constructions AKA "[probably-RSA-]signed cookies".

Generating a sufficiently (16-32 bytes) long string of randomness and using just that as a session ID stored in a database is a perfectly fine technique, scales well enough and is quite hard to get wrong.


When you hit a critical size your database can no longer handle the throughput of the lookups. You'll then install memcache or some other tech and the stack JUST FOR handling the tokens is a majority of your datacenter load.

Most startups and projects never hit this size, they usually fold before that level of growth. It is much lower than one would assume though since every request made to an API has to do the lookups etc.


> When you hit a critical size your database can no longer handle the throughput of the lookups.

If your data store(s) can't handle the load of looking up a ~32 byte token (that is, if you are sane and not using JWT), then how exactly are they supposed to hold up to whatever your business logic part of the app is doing?


localStorage can be vulnerable to XSS attacks in cases where cookies are not.


OMG, where do people get this info? It's BS.

As an attacker, if I have successfully injected my JavaScript code into your webpage, I can make HTTP requests to your server to do whatever I want with that user's account (their cookie containing their session ID will automatically be attached to those malicious requests, so they will look like real requests from that user).

And yes, this attack also works with httpOnly cookies; I don't need to be able to read the cookie in order to use it.

The httpOnly flag is practically useless; I don't think any hacker worth their salt would want to steal session IDs for later use (session IDs and JWTs have a way of expiring quite quickly); usually with an XSS attack, you want to do the attack in-place from inside the victim's own browser.


You can put a JWT in a cookie :V


I've read so many of these anti-JWT articles, and I don't think I've ever come across a single reasonable complaint against them. What is wrong with people?

I've even read security reports by supposedly reputable security consultants also claiming that JWTs are bad but their arguments are hand wavy and make no sense.

>> Let’s say that your website gets roughly 100k page views per month. That means you’d be consuming an additional ~24MB of bandwidth each month.

Negligible.

>> You’re Going to Hit the Database Anyway

Yes, but less often. So your DB will be able to service more users overall. For some use cases, we're talking orders of magnitude more users.


> I've even read security reports by supposedly reputable security consultants also claiming that JWTs are bad but their arguments are hand wavy and make no sense.

https://auth0.com/blog/critical-vulnerabilities-in-json-web-...

https://blogs.adobe.com/security/2017/03/critical-vulnerabil...

These were critical vulnerabilities enabled by an error-prone cryptographic design that broke real systems.

Don't pretend it's hand-wavy.


These are vulnerabilities in specific implementations of JWT. It doesn't make JWT unsafe as a whole.


> These are vulnerabilities in specific implementations of JWT.

No, these are vulnerabilities in the standard itself.

I outlined the arguments here: https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-ba...

It's an error-prone cryptographic design that needs to be replaced.

Blaming implementations for faithfully implementing a flawed standard is a stupid thing to do, since it doesn't solve their insecurity.


It definitely was an implementation issue. The JWS spec (which is explicitly referenced in the JWT spec) states clearly under section 4.1.1:

'The JWS Signature value is not valid if the "alg" value does not represent a supported algorithm'

Even if you did consider it a flaw in the RFC itself, the fact that there was a flaw once-upon-a-time with a specific aspect of JWT, doesn't invalidate the whole idea of JWTs.

I don't know any major standard that was perfect from day 1. This can be said about TCP/IP (e.g. IPv4 addresses were clearly a mistake). Also, I recall that there were flaws with HTTP/1.0 and that's why HTTP/1.1 was released soon after. The WebSocket protocol also went through MANY iterations.

There are use cases where JWTs are necessary. For example, I did some work with real-time presence (to get notifications when users go online or offline); being able to get the username from the JWT instead of the database saves a lot of database queries, and the code is much cleaner since you can check synchronously instead of having to wait. Also, you don't want to waste precious CPU time doing DB queries for connections that haven't been authenticated yet.
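Reading claims out of a token without touching the database is just base64url-decoding the payload segment — a sketch (note: real code must verify the signature before trusting any claims):

```python
import base64, json

def jwt_claims(token: str) -> dict:
    # Extract the (unverified!) claims from a JWT's middle segment
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# A hypothetical token whose payload is {"username": "alice"}
demo = ("eyJhbGciOiJIUzI1NiJ9."
        + base64.urlsafe_b64encode(
              json.dumps({"username": "alice"}).encode()).rstrip(b"=").decode()
        + ".sig")
assert jwt_claims(demo)["username"] == "alice"
```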


> Even if you did consider it a flaw in the RFC itself, the fact that there was a flaw once-upon-a-time with a specific aspect of JWT, doesn't invalidate the whole idea of JWTs.

The premise that went into JWTs was not invalidated. Their design was proven by multiple incidents to be error-prone, so I sought to replace JWTs.

The result? https://paseto.io


Inability to do immediate revocations is a real drawback and a valid complaint.

If a trusted machine with a privileged token on it is compromised somehow, it will likely take the attacker some time to search for and discover the token. If you use centralized auth and you realize there was a breach before this happens (say from monitoring/anomaly alerts), you can revoke the token and prevent unauthorized access. If your token is JWT and it doesn't expire for another 45 minutes, you're in much worse shape.

This isn't the only factor to consider, but it's an important one.


Inability to do immediate revocation comes with all stateless, signed tokens (e.g. the way Rails uses cookies). I don't understand why JWT gets the bulk of this criticism when signed stateless cookies are quite prevalent.


Well, one argument against cookie-based sessions is that they get sent automatically, which opens a large attack vector for CSRF.

(yes, you can store JWTs in cookies too, but that is kinda uncommon)


Storing JWTs in an HttpOnly, Secure cookie is common and recommended for web apps.

https://stormpath.com/blog/where-to-store-your-jwts-cookies-...


Very good point, thanks for the writeup. Maybe JWTs are good for specific cases, but as most people use a DB for other things anyway, they don't really bring any practical advantage. They are just "cool" and look clever, so people probably started using them without proper reasoning. This resembles NoSQL and the attempt to burn oldschool SQL servers, with hopeless attempts to go back later when more complicated "join-like" requirements emerge.


That was a whole lot of chewing for very little meat. If this article were a tweet, it could've been "use cookies instead of JWTs, because if all you need is a UUID for a database lookup, cookies work just fine, and are probably encrypted using symmetric encryption just like JWTs."

Respect to the author for trying to generate some content for the company blog, but the end result ultimately comes down to a relatively trivial decision in order to save a few bytes.


The main and only argument against JWT tokens for me in context of user sessions is that I cannot revoke them if I block user account.


Have a relatively short expiration time.

Or, if you want to have more immediate revokes without having a ridiculously short expiration time, keep a list of blacklisted tokens that you clear at least every <refresh time> seconds.

At the end of the day, JWTs still have to be accepted by the server, which you have complete control over.


I can't run short expiration times. And if I am running a distributed system then blacklisting doesn't solve anything for me. I would need to sync blacklists or keep them in a database, which already defeats the purpose of using JWT for sessions, as I could do what I do now: just keep normal tokens in Redis and check them on every request.


Assuming blacklisting happens infrequently, you would have a few orders of magnitude fewer entries and could keep them in-memory on each server, having each server fetch the full list once every X seconds instead of doing a lookup for each request.


A lot of assumptions there to justify using JWT tokens. Unfortunately I can't allow someone that is blacklisted to use the service for X seconds.


Well you can block them, it just means you lose advantages of a JWT, like not requiring a database hit.


This is the main issue, there is no point in using JWT tokens for me in context of sessions. I can just use standard solution and keep normal tokens in Redis if I need to make db call anyway.


Really, why do people keep saying JWTs cannot be invalidated? Isn't it simple enough to use a Redis bitmap to store the info? Each token only takes one bit. How many tokens can you have, 1 billion? 120 MB is more than enough. What you save is the time and resources of hitting the database.
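A sketch of the bitmap idea, with a bytearray standing in for the Redis bitmap (Redis's SETBIT/GETBIT work the same way, one bit per numeric token ID; the sizes here are illustrative):

```python
# One bit per token: ~122 KB covers a million tokens, ~120 MB covers a billion.
bitmap = bytearray(1_000_000 // 8 + 1)

def revoke(token_id: int) -> None:
    # Equivalent of Redis SETBIT key token_id 1
    bitmap[token_id // 8] |= 1 << (token_id % 8)

def is_revoked(token_id: int) -> bool:
    # Equivalent of Redis GETBIT key token_id
    return bool(bitmap[token_id // 8] & (1 << (token_id % 8)))

revoke(123456)
assert is_revoked(123456)
assert not is_revoked(123457)
```

This does require embedding a numeric token ID as a claim in each JWT, so it's a compact revocation list rather than a fully stateless scheme.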


If you have a revocation list for JWT tokens, you could just as well just use session IDs, avoiding all issues with JWTs.


But you cannot store anything in your session ID. A JWT can carry the small amount of data that's needed by my service. I only need to validate the JWT and check if it's been invalidated, then I can go ahead and perform my business logic. I don't want to hit the DB to get all this data. Yes, you can argue why not just store it in Redis too, but with JWT I only need one bit.


"Look, if you just store the user id in an unsigned cookie it's only 6 characters."

Um. Yeah. Also completely insecure.

I know he mentions signed cookies shortly afterwards, but the way that part is phrased seriously made me twitch, especially coming from an auth provider's blog.


Couldn’t this just be an argument against session tokens in general? I don’t think I saw anything negative mentioned about JWT that wouldn’t be true for any session token.

I guess you could design a binary-first token format that’s smaller than JWT, that’s true.


The size argument ends up being dominant when storing JWTs in browser cookies because cookie storage size per domain is seriously limited. I know Chrome limits total cookie storage to 4kb, including cookie names.


How do people usually get around file downloads and initial page loads with JWTs (besides signed URLs), since you can't control and preempt these requests to also send the JWT?


I learned this the hard way, and a whole host of other lessons trying to ship side projects as fast as possible alongside a full time job.


You learned the wrong lesson.


I couldn't get a clear answer out of the article: is the author for or against JWTs for authentication purposes?


I have used JWTs only for mobile sessions.

Else I opt for the gateway pattern (a service handles authentication and then forwards requests to other microservices).


> In fact, in most web authentication cases, the JWT data is stored in a session cookie anyways, meaning that there are now two levels of signing. One on the cookie itself, and one on the JWT.

This sounds like a bug in someone's implementation of JWTs; most JWTs are not signed twice. It's thus not a valid criticism of JWTs.

> If you’re building a simple website like the ones described above, then your best bet is to stick with boring, simple, and secure server side sessions. Instead of storing a user ID inside of a JWT, then storing a JWT inside of a cookie: just store the user ID directly inside of the cookie and be done with it.

This is conflating two very different things. Sticking with boring, secure server side sessions… sure (though we're eliding how one would actually implement that, but let's assume it's by sending a sufficiently long, unguessable token in a cookie) but then we continue with "just store the user ID directly inside of the cookie" — no! The point of the JWT is to authenticate the user; just sticking a user ID into a cookie doesn't do that; the user could just change the ID in the cookie to whatever, and be done with it. I'd hope the author means "stick the user ID into the server-side storage" (that's associated with the client in some manner, likely by an unguessable token as mentioned earlier), but earlier in the article the same mistake is made:

> If we store the ID in a cookie, our total size is 6 bytes

Except, no, you're going to need to store that server-side, and send a token, which will realistically be 16 bytes, not 6. The comparison mostly still stands, since there isn't a significant difference between 6 and 16 bytes.

> For storing a simple user session, that is a ~51x size inflation on every single page request in exchange for cryptographic signing (as well as some header metadata).

It's not a 51x size inflation on every request. That datum is 51x larger, but the request itself includes plenty of other things.

This concern will diminish greatly as HTTP/2 is adopted, I think: HTTP/2 is capable of compressing headers, and in addition to that, multiple requests sending the same header can essentially say "let's call this header #1", then reference it with that number on subsequent requests, greatly reducing the transfer required for common, repetitive headers. If you're on AWS, ALBs support it, so it should be very accessible for most folks w/ browsers. (Client-side programming libraries are another thing, but it'll get there.)

> You’re Going to Hit the Database Anyway

The point is that you can hit the database one less time; yeah, you might need the user object, but you might not. But the suggested alternative of using server-side storage requires a DB lookup in addition to looking up the user object: we've got to first translate the session token in the cookie into a user ID, and then from there get the user object itself. (Perhaps you can, in some cases, JOIN these if you're using a relational DB; but it still bites you if you don't need the user object, and even if you do, the additional disk read isn't going to be faster than checking a signature.)

If you're putting just the user ID in the cookie, and signing it, you're just re-inventing JWTs. Perhaps you'll save a few bytes, but you lose out on what are hopefully more robust, feature complete libraries, and a common, familiar format.
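For illustration, here's roughly what "user ID in a signed cookie" looks like — a hand-rolled, slightly smaller JWT (the key and cookie format are made up):

```python
import hashlib, hmac

KEY = b"cookie-signing-key"  # hypothetical server-side secret

def sign_cookie(user_id: str) -> str:
    # Cookie value is "user_id.signature" — same shape as a JWT, minus the header
    sig = hmac.new(KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def read_cookie(cookie: str):
    # Returns the user ID if the signature checks out, else None
    user_id, _, sig = cookie.rpartition(".")
    expected = hmac.new(KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(expected, sig) else None

cookie = sign_cookie("42")
assert read_cookie(cookie) == "42"
assert read_cookie("99." + "0" * 64) is None  # tampered ID is rejected
```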


tldr; JWTs are a premature optimization (and are often a de-optimization) in the context of user sessions.

That's all the writer is saying, and I think he said it quite well.


Yes. It simply doesn't "solve" a particular "need or problem". It just tries to be a fancy modern alternative to tokens not really solving anything.


"it's going to be 304 bytes (...) instead of 6 bytes!"

O...M...G... I mean... Let me check again what year it is.

Please stop writing bullshit just because you can have a blog for free...



