JSON Web Tokens vs. Sessions (float-middle.com)
321 points by darth_mastah on June 18, 2016 | 170 comments



For people using JWT as a substitute for stateful sessions, how do you handle renewal (or revocation)?

With a traditional session, the token is set to expire after some period of inactivity (e.g. one hour). Subsequent requests push out the expiration... so that it's always one hour from the last activity, rather than one hour from initial authentication.

With JWT, the expiration time is baked into the token and seems effectively immutable. To extend the session, you have to either:

1. Re-authenticate from the browser every hour and store a new JWT token, which is kind of an awful user experience, or

2. Renew the JWT token from the server side every hour. At least in Google's implementation, this requires the user to grant "offline access" (another awful user experience)... and you'd need some hacky approach for replacing the JWT token in the user's browser.

So with all the recent discussion about not using JWT for sessions, what do you guys do? Do you simply make users re-authenticate every hour? Is there another magic trick that no one has brought up?

In my own shop, we're using the browser's JWT token as a server-side cache key... and storing the "real" expiration in cache so that it can be extended (or revoked) as needed. I would be interested to know if others take a similar approach, or have issues with that?


For this, you can use refresh tokens and set the JWT expiration to a low interval - say 10 minutes. After every 10 minutes, the JWT expires, authentication fails, and the client uses the refresh token to get a new JWT. To revoke a client, revoke their refresh token. This way, though they won't be logged out immediately, they would be logged out in a max of 10 minutes when they need to refresh again, and find out that their refresh token is no longer valid. The point is that instead of every request touching your DB or cache, only one request does every 10 minutes.
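The client half of that can live in one small API wrapper. A minimal sketch in browser JS (the /auth/refresh endpoint, its response shape, and the in-memory token variables are assumptions, not a real API):

  let accessToken;   // short-lived JWT, kept in memory
  let refreshToken;  // longer-lived, revocable server-side

  function withAuth(options, token) {
    const headers = { ...options.headers, Authorization: 'Bearer ' + token };
    return { ...options, headers };
  }

  // Wrapper around fetch: on a 401, trade the refresh token for a new JWT
  // and retry the original request once.
  async function apiFetch(url, options = {}) {
    let res = await fetch(url, withAuth(options, accessToken));
    if (res.status === 401) {
      const r = await fetch('/auth/refresh', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ refreshToken })
      });
      if (!r.ok) throw new Error('refresh token revoked; log in again');
      accessToken = (await r.json()).accessToken;
      res = await fetch(url, withAuth(options, accessToken));
    }
    return res;
  }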


I know this works, and I've used it, but I also find it to be the most aggravating thing about JWT and also OAuth. With OAuth, some sites allow you to refresh a token after it times out (so really the refresh token is the source of truth, defeating the purpose of the OAuth token), and others only allow you to refresh before it times out (forcing a login by the user if they are disconnected too long, or storing their username/password in the system keychain, making that the source of truth and again defeating the purpose of the OAuth token).

Also a timeout of this kind is only security theater, because it may only take moments to vacuum a client's data once a token has been skimmed.

The JWT library I'm using in Laravel blacklists tokens by storing them in their own table, rather than invalidating them. This is self-evidently a pretty bad vulnerability because malicious users could fill up the database, so now the developer has to deal with that scenario.

Put all that together and I think the notion of token expiration is, well, dumb. In fact I think timeouts of any kind are a code smell. They make otherwise deterministic code very difficult to reason about in the edge cases. The best thing to do is encapsulate timeout handling in a refresh layer of some kind, once again defeating the whole purpose of timeouts in the first place.


I have personally experienced the security disadvantage you mentioned. I used my Google login to sign into an email client. I immediately realised that the app was going to store all of my email data in their private servers. I quickly went over to the Google dashboard and deauthorised the app, relieved that they would only have been able to get my first few mails in this time. But the app retained access to my emails, even receiving new emails for some time. Probably because of something similar to a refresh token being revoked, but the access token still being valid. I wanted to stop the app from accessing my e-mails, but could not.

However, despite this disadvantage some applications just cannot afford the load of every single request touching the DB or cache. JWT makes sense for that particular use case when you are willing to make this compromise. Instead of every single request touching the cache, maybe every 1000th request does now, because of the token expiration time.

Another use case is when you need a very simple, stateless way to authenticate users and don't require revocation. Some Oauth providers don't give you the option to revoke access tokens, for example.


> despite this disadvantage some applications just cannot afford the load of every single request touching the DB or cache.

Disagree. This is one of the simplest things alive to distribute. Split the token lookup from the query: do the query in the DB, and the token lookup is effectively a distributed hash table lookup (assuming the token is, say, a UUID). Once the DB query comes back, store the result pending the successful retrieval of the token.

What's difficult is handling something like millions of concurrent video downloads / uploads - not looking up tiny tokens.


tiny tokens, but needed for every single operation


Sure, but the really hot tokens could be cached right next to the DB. Plus how many operations is a person doing per second? If it's more than a couple you can batch them pretty easily.


It's not dumb if you do it right.

Think of it this way, the _real_ token is your refresh token. It's stored in your database. You control it and it can be revoked at any time. So, now you build 100 other services, and they all accept this refresh token. Problem is, since you control it so well, every other service now needs to validate that refresh token with the auth service on every request. It would be really nice if we could get around that massive traffic pinch point. So, we create crypto tokens that can be validated by every service without the need to make a network call. As a compromise, we make this new token expire in an hour so that the _client_ needs to validate their refresh token every hour and all our services are freed from ever directly calling the auth service. Sure, this means that when you log out you're not really logged out for up to an hour, but it's all tradeoffs.
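For example, with asymmetric signing every service can check tokens in-process; a sketch using the jsonwebtoken npm package (the key file name is illustrative):

  const fs = require('fs');
  const jwt = require('jsonwebtoken');

  // Public key of the auth service, shipped to every service at deploy time.
  const authPublicKey = fs.readFileSync('auth-service.pub');

  function authenticate(token) {
    // Verified entirely locally: no network call to the auth service.
    // Throws if the signature is invalid or the one-hour expiry has passed.
    return jwt.verify(token, authPublicKey, { algorithms: ['RS256'] });
  }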


For many applications, not being able to immediately log out is an unacceptable trade-off. If you know your account has been compromised and you need to kill all sessions ASAP, an hour delay is unacceptable.


> The JWT library I'm using in Laravel blacklists tokens by storing them in their own table, rather than invalidating them.

The main rationale for JWTs is that it removes the session store as a point of contention (and secondarily it resolves some xdomain issues that aren't that difficult to work around anyway). If you're going to introduce a new table/cache, you're likely better off just using sessions.

Totally agree about timeout management.


TTLs aren't necessarily a code smell.

They're important for DNS.


The refresh token doesn't defeat the purpose of oauth. The purpose is that the third party needs to check in again to refresh.

This gives the end user the time to revoke the token at the provider without the need to revoke or even trust the third party.


I've captured this exact workflow, and it works really well in practice on mobile and browser (probably anything that can curl with JSON) https://github.com/hharnisc/auth-service


Ehh... but at least for Google's implementation, you won't have a refresh token in the first place if the user doesn't grant "offline access" on a special page that comes up after login.

In our usability testing, we've found that this freaks a lot of people out and reduces adoption. The benefits of basically outsourcing our session management to Google don't outweigh this... so we use JWT for auth only, and then use that token as a session key for our own local solution.


> the client uses the refresh token to get a new JWT

What's the point of using the JWT then? If the refresh token lasts for longer than the JWT, and you need to send it periodically back up to the server to auth the user, why use the JWT in the first place?


The point is that it would reduce the DB/cache load, as the refresh token would need to be verified once in a few minutes or so as opposed to verifying it for every request. Regular requests could be authenticated in the CPU itself without having to go through to the DB/cache layer. This means lower latency and reduced load on the DB/cache.


...then make the JWT last as long as the refresh token.


If the JWT lasts as long as the refresh token, then what's the point of having a refresh token? You would then probably need to get new refresh tokens to make the session last longer.

The idea is to make the refresh token last for say a few days, and the JWT for say 10 minutes. Now, every 10 minutes the client needs to use the refresh token to get a new JWT. The maximum time a client can have access to the service without a valid refresh token is 10 minutes. All the requests made in this window of 10 minutes would be deemed authenticated by verifying the JWT, and without having to go through the database or cache.

Now, say a user of a web app clicks "log me out from all my devices". The user's access needs to be revoked from everywhere they are logged in. If you invalidate all their refresh tokens, then in a max of 10 minutes they would be logged out from everywhere, as their refresh tokens would no longer work and the JWT duration is only 10 minutes.

This approach is essentially a mid-way or a tradeoff between using traditional sessions and JWT. "Pure" JWT is stateless and hence cannot support individual session revocation. The only way to invalidate sessions in "pure" JWT would be to invalidate the key or certificate used to sign the JWT, but that would invalidate everyone else's sessions as well and hence is not very practical.

Since with this approach you implement sessions plus JWT, it's more complicated than just using sessions. JWT should be used for such applications when the latency or load benefit is significant enough to justify the added complexity. For applications that do not need session revocation, however, JWTs are a convenient way to implement sessions without needing a DB or cache layer.
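Server-side, issuing the pair might look like this (a sketch with the jsonwebtoken package; the refresh-token store is an assumed interface, not a real library):

  const crypto = require('crypto');
  const jwt = require('jsonwebtoken');

  function issueTokens(userId, refreshStore) {
    // Stateless JWT with the short 10-minute lifetime; verifiable anywhere.
    const accessToken = jwt.sign({ sub: userId }, process.env.JWT_SECRET,
                                 { expiresIn: '10m' });
    // Opaque refresh token, stored server-side so it can be revoked.
    const refreshToken = crypto.randomBytes(32).toString('hex');
    refreshStore.save(userId, refreshToken, { ttlDays: 7 }); // assumed interface
    return { accessToken, refreshToken };
  }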


It's so that if the JWT is stolen in transit, the thief only has access to the token for a shorter period of time. This is why they should expire quickly. Whether or not you think that matters is not up for debate. That's how it is.


jwt can be verified by the application without having to make a call to the auth service


I've been working on rolling my own authentication layer in Node/Express lately. Your comment is the first time I understood how refresh tokens work.


JWT is just a token. It's not some panacea of client-side only authentication. There are a lot of people lamenting the difficulty in performing logout via JWT. I believe people are missing the point. The failing isn't with JWT, it's with the implementation of the session system.

Typically with sessions the client has a session key. The key gets sent to the server where it looks up the session (via memory, cache, database, whatever). You can create a new session, validate an existing session, or end a session. All using that key. The only difference between JWT and cookies is JWTs aren't automatically sent with every request. You have to explicitly send them. I believe this is a good thing. It avoids some common attack vectors.


Is there anything wrong with saving the token in the cookie? I'm not exactly sure how to save them in the header. I'm guessing save it to localStorage and use javascript to pass it back to the server?


This article talked a bit about putting them in cookies:

  The header method is preferred for security reasons - cookies would be susceptible to CSRF (Cross Site Request Forgery) unless CSRF tokens were used.

  Secondly, the cookies can be sent back only to the same domain (or at most second level domain) they were issued from. If the authentication service resides on a different domain, cookies require much more wild creativeness.
As far as putting something in the header, if you're using javascript check out superagent. It's as easy as:

  request(url).set('SomeHeader', 'SomeValue');
or with the latest fetch API just do:

  var request = new Request('/users.json', {method: 'POST', 
    headers: new Headers({'Content-Type': 'text/plain'})
  });

  fetch(request).then(function() { /* handle response */ });
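And presumably for a JWT you'd set the Authorization header the same way:

  fetch('/api/profile', {
    headers: { 'Authorization': 'Bearer ' + token }  // token from wherever you stored it
  }).then(function(response) { /* handle response */ });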


You can also supply an options object (including headers) as the second argument[1].

[1]https://developer.mozilla.org/en-US/docs/Web/API/GlobalFetch...


"1. Re-authenticate from the browser every hour and store a new JWT token, which is kind of an awful user experience"

The old token can be used to request a new token with an extended expiration, before it times out. This can easily happen behind the scenes, so it does not affect the user experience at all. The real problem is that you cannot enforce logouts.

If you just accept that you cannot 100% enforce logouts, then it works fine. A logout is performed by the client-side code deleting the token. There is no way of knowing that all copies of the token were really deleted. It is imperative that HTTPS is used so that the token cannot be easily stolen.

You could also bind the token to a specific IP but this would fail for devices with dynamic IPs that could change at any time.


Right. You can't logout, and you can't extend the expiration time without the user granting an "offline access" scope during the login process.

So WHY exactly is this better than a simple distributed session store (e.g. Redis or whatever), with a browser header or signed cookie as the cache key? That's the underlying premise of all this recent discussion that eludes me.

It seems an argument could certainly be made that you don't need to use sessions if you don't need sessions. But you kinda DO need sessions if you do need sessions. I think all of these recent blogs hand-wave over that, and much of the recent "JWT instead of sessions" chatter is people just repeating a mantra that they read in blog posts because it seems like the hot new mantra.


I think your issue is that you're trying to wrap Google's auth in your own jwt implementation. This setup can only prove your own trust of the tokens you create.

You can have a user auth with Google and provide you a user id; you can then slap that id into a token and know that you gave that token to the right user and that the info inside that token is what you put there. It does NOT let you keep other sessions alive on its own. You won't be able to get around Google's expiry this way if you need a valid auth to Google's APIs.

What it does do is let you auth a user through google once at the beginning of your own app's session, and not have to keep track of session in your own database or hit google multiple times in a user session.


Another benefit is providing a standard way to access multiple services at once with a single token. It is possible to force a logout on the server side if a problem arises, and you wouldn't need to store these invalidated tokens in the database because it'd likely be a small blacklist you make available to all the servers (and a blacklisted token would be in that list only until the expiration time).


> access multiple services at once with a single token

If you're talking about multiple microservices under your own control, then they could all work with your distributed session store just as easily (if not more so).

If you're talking about SSO across multiple third-party services, then perhaps JWT could be a nicer solution than SAML. However, here you're talking about a single-digit percentage of edge cases. Not enough to declare "JWT over sessions" as a general rule.


With JWT, you have the option of stateful or stateless. Stateless gives you cheap federation (any server can authenticate a token issued by another server), but you lose the ability to handle revocation without some sort of statefulness introduced (a redis cache with revoked token ids for example). Stateful is basically a non-cookie based session.

One possible alternative to enable auto-renewal is to issue a new token with every request, and manually bake in the persistence of the token into your front end client.

In my own system, I have login with facebook, which submits current FB auth tokens with every request, after which I issue my own app token with all the necessary authorization information for my business logic. Whenever the app token expires, I attempt re-authentication using facebook login, and if successful I send back an updated app token. The front end client has logic built in to compare and swap app tokens if they change and persist in sessionStorage.

It's pretty hacky. I'm a little worried about vulnerabilities that I might be introducing. I luck out in that I don't have a public API which would force the client to implement my front end logic. But it works, for now.


"A Redis cache with revoked token IDs". Default-allow revocation. What's not to love about JWT?


I don't see how else this could work. If we require re-auth e.g. every 10 minutes, then make 10 minutes the token lifetime. If there is one source of "authentication truth" that must be consulted every time, then we don't want tokens anyway, because we have to wait on the truth for every single request.


> storing the "real" expiration in cache so that it can be extended (or revoked) as needed.

You can also have a password_last_changed field on your user model, where any token issued before this date is considered invalid. That way, if a user's account somehow gets compromised, all they need to do is change their password and then all of their existing sessions are expired automatically.

I can't think of any good reason for storing the expiration dates of each individual token, although maybe there is a use case somewhere.
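For what it's worth, that check is just the token's iat claim against the stored timestamp; a sketch (jsonwebtoken package; the user store is an assumed interface):

  const jwt = require('jsonwebtoken');

  async function verifyToken(token, users) {
    const payload = jwt.verify(token, process.env.JWT_SECRET);
    const user = await users.findById(payload.sub); // assumed lookup
    // iat is in seconds; reject anything issued before the last password change.
    if (payload.iat * 1000 < user.passwordLastChanged.getTime()) {
      throw new Error('token predates password change');
    }
    return user;
  }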


You're missing the whole point. If the server were to track password_last_changed it might as well just track user_currently_loggedin.


No, there is a huge difference in write load between those two options.


But it's not RESTful which was the entire purpose of JWT.


Think how you would do that with session cookies - you either issue a fresh cookie or ask for login/password. Same with JWT - your client-server API wrapper can add a renewed token to a response when needed. There are many other reasons to have a central API wrapper on the client (common error checking, serialisation, headers etc.), so it doesn't cost much to stick in an extra check for a new token.


In my system, the authentication strategy is the responsibility of the clients. The auth system only provides tokens via the /auth and /refresh_token routes, given respectively a usn/pwd or a valid token.

So the client can refresh the token when it is close to expiring, or just auth again after expiration.


> For people using JWT as a substitute for stateful sessions, how do you handle renewal (or revocation)?

You can do the inverse, e.g. instead of storing 'active sessions', you just store 'revoked sessions'.


yeah, but the problem with that is you're just using the token as a session id.


It's subtly different - you can store all the actual data in the token.


As someone working with it, we use it only for machine to machine communication.

There it is easy to reauth. I don't think JWT was ever meant to be used in browsers...


you can do a combination of session and jwt. your firewall manages the session, and the jwt can be used internally, which effectively decouples your architecture and the firewall (except you will need to send a logout message to the firewall if the user chooses to logout)

the jwt is useful as a capability model if you've broken your internal arch into stateless microservices.


I found this: http://www.cse.msu.edu/~alexliu/publications/Cookie/cookie.p... here: http://security.stackexchange.com/questions/7398/secure-sess... And implemented it. It was quick, cheap, and easy enough for me to assign a new token with most (if not every) request that required authorization/authentication. The only bits of info kept in the cookie were insensitive bits of data, so if a single token got cracked it wouldn't be a huge deal.


Re-issuing tokens sounds like a poor-man's nonce. I skimmed the pseudo-code in their paper, immediately said to myself 'uhh..', then a section later they addressed my 'uhh..' by admitting replay attacks are effectively trivial.

In some cases, no security is better than bad security, because at least your users are aware of the insecurity. (Granted, you're protecting against the replay attack - my point still stands for anyone even considering implementing something based on that paper.)


IIRC tptacek has been beating this drum for a while, but it seems that he got tired of it, so I should pick up the drumsticks in his stead:

Use less crypto. The less crypto is being used, the fewer mistakes are being made.

When it comes to sessions, generate a secure random 256-bit token and use that as a session id. Store it in a database or in-memory store. Sticky sessions + local session token storage will fix your network latency problems when you start scaling out.
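In Node that's about two lines (sketch):

  const crypto = require('crypto');

  // 256 bits from the CSPRNG, hex-encoded, used as an opaque session id.
  const sessionId = crypto.randomBytes(32).toString('hex');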

Federation becomes moderately difficult. Perhaps you could store a copy of the session id on each node, and when a session id is compromised, proactively reach out to all nodes and ask them to purge the session. This allows immediate mitigation of a session id leak, and since it doesn't rely on timeouts there is no vulnerability window for data exfiltration upon a breach. And no crypto.


This misses one huge benefit of JWTs: Other parties can trust your token if they were not the ones to sign it.

For example, say client A calls service B, using a token signed by service C. Previously, we were using randomly generated session keys, which meant B had to ask C every time. But with JWTs, B can directly verify that the token is genuine without asking C, because it has C's public key.

We still check with C now and then, but that's because the token auto-expires. We use a long-lived master token stored in a cookie to generate new short-lived ones.
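The issuing side at C is the mirror image; a sketch with the jsonwebtoken package (key file name is illustrative):

  const fs = require('fs');
  const jwt = require('jsonwebtoken');

  // Only the auth service C holds the private key; B only ever sees the public half.
  const privateKey = fs.readFileSync('auth-service.key');

  function issueShortLivedToken(userId) {
    return jwt.sign({ sub: userId }, privateKey,
                    { algorithm: 'RS256', expiresIn: '15m' });
  }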


The problem is that you're trying to treat sessions and authorizations as one and the same thing. They're not.

What kind of services are you envisioning? Direct-to-database services like Firebase are a horrible idea for a plethora of security-related reasons, and if you control both service B and C yourself, then you either a) use one-time authorization tokens for stateless services or b) exchange a one-time authorization token for a session on a stateful service.

In none of those three cases do you use the token as the session. Tokens are handed out on a single-use, as-needed basis.


The user does have a session. Tokens are temporary, short-lived authorizations.

Sessions require a central session store; every request has to go to the central session store to check the session's validity. Incurring one session check per API call is bad enough, but when each API call then invokes half a dozen other APIs, you have a problem. Central session stores don't scale with distributed architectures.

I've not heard the term "direct-to-database" before, but we've been doing it for about 6 years, albeit not using Firebase, and it's a huge win over the classic "custom, ad-hoc API per use case" methodology. Exposing a shared data layer abstraction between microservices is no different than exposing a more targeted API (i.e. there's conceptually zero difference between "POST /objects/42" and "POST /users/42"), including as far as security is concerned.


There was a solution to that situation proposed here[0] where you keep using session tokens on the user end (so you can still do stuff like revoke sessions), but convert that to a signed token for all internal API calls.

[0] http://cryto.net/%7Ejoepie91/blog/2016/06/19/stop-using-jwt-...


Thanks. That's pretty much the solution we ended up with, though the article author doesn't see the whole picture. Asking a central authority to issue single-use tokens for every call will result in a huge amount of unneeded network traffic.


> Central session stores don't scale with distributed architectures.

Sessions stores scale as much as anything else.


Not what I was referring to. They scale just fine on their own, but the surrounding architecture doesn't when it becomes involved for every single call.

Simple scenario: Let's say user X wants to update document Y, which involves fetching a photo, processing it and storing it as Z. To read Y, we need to check the session. To update Y, we need to check the session. To store the photo, we need to check the session. We're already up to three roundtrips to the session store.

Some interactions require a lot more participants, each of which need to check the store. Over and over again, even though _clearly_ the world hasn't changed the last 300ms. If one API requires 6 calls to its partners, that's 6 times more roundtrips than necessary.


The entire last paragraph of my previous post describes one way to deal with it, without consulting a central session store.


By previous post, did you mean the bit about "Federation becomes moderately difficult"?

I don't know if you've ever developed microservices, but building this logic into every single microservice would be a lot of work. We have dozens of microservices, written in different languages, so even if we wrote some generic glue as a library, we'd have to write it at least three times (Go, Ruby and Node.js).


Stop using so many different languages?


I haven't. If you use microservices you should have an API gateway with sticky sessions, there is no need to check permissions for each service. It's more secure that way too, as it uses less code.


It's an interesting idea, but that would mean every internal microservice request (they don't just service users) would have to go through an API gateway, and every call would have to be checked against a session store.

It also means (if I understand the design correctly) that the API gateway has to munge every request envelope to add the actual user ID as an HTTP header (or something similar with gRPC), so that the internal microservices can see it, otherwise they'd have to ask the session store, which would defeat the purpose.

This in turn would mean that every microservice has to trust the upstream, which would require some infrastructure to manage (i.e., microservices would no longer be simply exposed to the world; the API gateway would be a special trusted actor). You also now have a dependency on the API gateway, because the microservice wouldn't know how to deal with a session without it.


The services are behind the firewall, unreachable from the internet, and only accept requests from the gateway, and from each other. The gateway strips your SSL (which I'm sure you are using :)), authenticates the request against its local security token store, then adds a bunch of headers: request id for tracing purposes, user IP for logging purposes, user id for authorizing access to various resources, etc.

The service only accepts the request; it doesn't care where the request came from because it implicitly trusts the environment to be protected from direct, unauthenticated calls. The request has the user id in it, so the service knows which data is being accessed.

This way we have separated the concerns of security from those of state management. This is what I'm arguing for - security without unnecessary public-facing crypto, and separation of authentication from authorization. The problem with public-facing crypto is that there are tons of obscure attacks on it - manipulating individual bits and sending them to the victim often yields information. New exploits come to light every now and again. I was caught with my pants around my ankles when the padding oracle attack against signed cookies came out of the clear blue sky: http://robertheaton.com/2013/07/29/padding-oracle-attack/

Now, to keep the session state you have three options:

- Do the same thing you do with JWT: send the data to the user and accept it back. You know the user has been authenticated already, he can only harm his own resources. When services call each other they pass the session state along with the call, same as JWT again.

- Keep the session data in the centralized session store. Which kind of defeats the purpose of eliminating the centralized session store. :) There are many ways to make that scale well for throughput, e.g. once you know the user id you can trivially shard the session store, but I'm not sure about the latency from accessing yet another machine. This needs to be tested though; perhaps it's negligible. But let's keep that out for now, given the scope of the article.

- Keep the session store at the gateway. When the service replies to the request, part of the payload is the new (additional) session data. The gateway strips that part and keeps it to itself. The next time the gateway calls a service for the same user it passes the session data along. The service does not need to know anything about the gateway - it receives a request with user id and session state, then returns the result with new/updated session state. The nice thing is that the users never see the session state (shorter url, less data on the wire, lower chance of malformed data), and yet microservices never have to know where it came from. When services call each other they can attach the state along with it. It may get tricky having to pass the state back and forth along the call chain of various services, but it's the same problem you would have with JWT anyways. The complexity is why I would circle back to sharded session state storage.

Hopefully this answers all of your questions, let me know if it doesn't? It's an interesting conversation, thanks for that.


You're right about the API gateway being a good pattern for sending external traffic to internal microservices, and it is something you'd want regardless of what session system you used.

I'd let microservices talk directly to each other, however, rather than going through the API gateway. This means you reserve the API gateway for external traffic, and trust the internal traffic. You still need to standardize on/configure a set of "context headers" to pass from one service to another so that the identity of the original caller is preserved; but since this won't rely on a session, it can be trusted and treated opaquely by each application.

My earlier point stands, though: Once you deploy something like this, you now have an explicit dependency on the API gateway, because services can never be directly exposed to the world. It impacts local development, too, because now you either talk using session via the API gateway (which you have to run locally) or directly to the service (using whatever identifying information it trusts), and the fact that they are different is a potential point of confusion, especially for someone not familiar with the architecture. ("It isn't working" — "Oh, you need to go through the API gateway.")

Interesting conversation indeed. I wish there were a better forum for such discussions. There's a lot about microservices where I'd love to exchange ideas and good patterns.


Put it in the API gateway?


You think mistakes can't be made with session ids? Anyways, there are well trusted libraries out there for JWT.


I don't understand. A session key is just a lookup key for a database table. What mistakes are made?


There are tricks there too. If you just look it up in the database, I could find out how much time it took to retrieve the strings, which will leak the session id over time. Databases don't do constant-time data retrieval. You would have to come up with a way to retrieve the session token from the database in constant time, or at least in time that does not betray the session id.


But the session id isn't sensitive, is it?

I mean most session verification middleware checks the IP address of the request against the address of the session login. If the addresses don't match you kill the session and force another login.

I usually just put the session id in a cookie and use https to encrypt the whole stream shrug


> The less crypto is being used, the fewer mistakes are being made.

I don't understand this position. What do you mean by "fewer mistakes are being made"?


Cryptography-related code is incredibly easy to fuck up, and the tiniest mistake can completely break your security model. A mistake that could be insignificant in other kinds of code, could easily be critical in crypto-related code.

Thus, you reduce both the number of mistakes and the impact of mistakes by avoiding crypto where possible.

EDIT: Yes, this includes using existing libraries. It's not just the implementation of the primitives themselves where issues occur.


In fact, a heap of JWT libraries had massive vulnerabilities in them not that long ago, which meant you could easily forge JWTs.


Also, worth mentioning this article as an opinion:

http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-fo...

Edit: Title is Stop using JWT for sessions



That article is really good.


That last part where he talks about logging out being the responsibility of the client is rather key. Basically I can't invalidate the key from the server side. So if a user's account is compromised and they recover it on their mobile app for example, I can't sign the user out of everywhere else too. It's what has given me pause about jwt so far and has held me back from using it. I find the cookie is generally good enough to hold most other information I need about the user.


I store a `login_count` in my User model. When searching for a user I `find_by_user_and_login_count`. The token they have has their current login_count in the claims. If I want to logout server side, I just update this value (it can be a random string if you'd rather it not be an increasing integer) and it will invalidate all tokens. When they logout client side, of course we can just clear the token from their memory. If they want to invalidate all tokens, then we have an API endpoint for that and it will log them out from all devices.
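Roughly (a sketch; the claim name and the User lookup are whatever your models use):

  const jwt = require('jsonwebtoken');

  async function authenticate(token, User) {
    const claims = jwt.verify(token, process.env.JWT_SECRET);
    // Valid only while login_count still matches the user row; bumping the
    // counter server-side invalidates every outstanding token at once.
    const user = await User.findOne({ id: claims.sub,
                                      login_count: claims.login_count });
    if (!user) throw new Error('token revoked');
    return user;
  }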


You can implement sign out everywhere by setting a reauth flag on the user in the database. You lose the "completely stateless" aspect that JWT claims to provide, but it's a small trade-off for tighter security.


Effectively you are now using the database as your session storage?


But you're not storing the session there, just the key/token so you can change it. The session payload is still completely in the JSON body maintained by the client and sent with each request.


What difference does that make? Once you commit to this you have to do a database lookup for every request. Why not keep the session data there?


You only have to validate the token. It doesn't have to be a database-type medium because you're not writing very often; in fact, all you're really doing is making sure the token is not invalid. The session data could be changing on every request, which would be at least one write on every request. With this system you are only writing to the central medium on a session creation or a session invalidation.


This is not entirely true. Since we're talking about implementing stateful sessions, you could receive a valid token (stolen, or otherwise) after the user has logged out.

You are correct that the lookup doesn't have to be via the database. You could implement a caching system where the cache is invalidated when the user logs out and requires reauthentication. This is the notion of the session. By definition they cannot be stateless.

Stateless authentication is inherently (slightly) less secure than sessions. I think of a blind librarian who gives out keys to the library. Whoever has a key has access. You can put limitations on the timeframe someone has access to the library, but that's it. If your key gets stolen, the blind librarian can't help you as there is no way for him to tell if it's really you.


If the user reauthenticates and you unset the reauth flag, wouldn't their previous sessions (e.g. tokens held by an attacker) suddenly become valid again? How would you prevent such an attack?


You wouldn't use a boolean flag. I suggest setting a validity timestamp for the user, and reject any token that was issued-at any earlier time.

(This isn't a perfect scheme since a compromised issuer could have been induced to send post-dated tokens. If your need for global logout was to invalidate tokens issued by a compromised issuer, you'll need to blacklist keys as well)


Why do you think you can't invalidate a JWT? Store a JWT that is associated with some object in a database that has the field `isInvalidated`. This isn't rocket science. Sure - this turns the JWT token into a session but there is no way to invalidate based on something that isn't determined at creation time without storing something in a database.


What's the point of using a JWT then? This effectively converts it into a session token.


You can create a JWT without writing to a datastore - this is a key difference.


use a TTL on a cache store, such as Redis


Yes, but this is still a deadend.


How so?

It's effectively a micro-optimization that will have no real effect, but you can do a simple "exists" query when searching the revocation list, and the TTL keeps the collection small.

Not advocating for JWT as it's a silly mess to do everything correctly, but it is possible.


Searching a revocation list misses the point of RESTful authentication.


There is no such thing as a RESTful authentication. REST's design hinges on URIs being public.


What do public URIs have to do with it? I don't think we are speaking the same language.


The impact can be mitigated by having low TTLs and using refresh tokens. This will give you a rolling window. If the TTL is 10 minutes and the client doesn't make any requests in 10 minutes, they will be timed out. But if the client continues to have at least one request every 10 minutes the session persists. Session persistence can also be ensured by having your web-client for example make a request to the server every couple of minutes.

Server-side key invalidation is entirely possible but it would require having a blacklist of disabled keys and comparing every request against the blacklist. This would obviously concede the benefits of scale from JWT tokens since you are doing the same thing as server-side sessions. However, the blacklist should be considered only as an escape hatch and need not be enabled at all times. In fact, once all the tokens in the blacklist expire, the blacklist itself can be disabled and things go back to the way they were.
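A sketch of that blacklist with Redis TTLs (node-redis style; the key naming is illustrative). Each entry expires exactly when the token itself would, so the list stays small:

  // Revoke: blacklist the token's jti only until its own exp passes anyway.
  async function revoke(redis, jti, exp) {
    const ttl = exp - Math.floor(Date.now() / 1000);
    if (ttl > 0) await redis.set('revoked:' + jti, '1', { EX: ttl });
  }

  // Checked on each request, after normal signature verification.
  async function isRevoked(redis, jti) {
    return (await redis.exists('revoked:' + jti)) === 1;
  }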


This is exactly right. Many people confuse this problem with solutions that involve storing state on the server. Doing so completely misses the whole point of RESTful authentication.



That one is from March 2015, and is about some specific libraries that had vulnerabilities at that time.


If you need to validate the Authorization header on every request, that's not really different from the session tokens we've been using for the past 15 years. JWT is just a formalized way of managing cookies. Which is nice and I like it, but it doesn't actually enable anything that couldn't be done before, albeit with a more ad hoc approach.


Right, Signed Cookies.

JWT doesn't make the claim that it's a new concept; you are assuming as much. It's a standard, and as you correctly gleaned, like most other standards it comes with a lot of benefits and best practices, is battle tested, and is ready to use in your favorite frameworks.

It becomes even more useful if your application serves multiple clients such as browsers, iOS applications and so forth, because you can hit the ground running without having to reinvent anything.


Both browsers and mobile frameworks can deal with session cookies just fine. JWT doesn't solve anything there.


Your iOS application can use sessions and cookies. There is nothing magic about sessions and cookies.


    [headerB64, payloadB64, signatureB64] = jwt.split('.');

    if (atob(signatureB64) === signatureCreatingFunction(headerB64 + '.' + payloadB64)) {
        // good
    } else {
        // no good
    }
You really need a constant time compare for the signature, else you leak information about the correct signature in the timing of the response.
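In Node, for instance, crypto.timingSafeEqual does the constant-time part (sketch):

    const crypto = require('crypto');

    function signaturesMatch(expected, actual) {
        const a = Buffer.from(expected);
        const b = Buffer.from(actual);
        // timingSafeEqual throws on length mismatch, so check length first;
        // the check leaks only the length, not the contents.
        return a.length === b.length && crypto.timingSafeEqual(a, b);
    }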


I don't think this is true. Timing attacks are only really useful if you can change a single piece of data and analyze the time difference until you find the right value, over and over again. Like comparing a password.

The unknown data in the signature creating function is a key. The output is a hash. If you were trying to capture the key you would need to guess what the key was to get the correct hash. Each byte you change results in a cascade of changes in the output. You can't get information about the key from a timing attack on a hash function. I think.


This is not about retrieving the key, an attacker can still retrieve the correct opaque signature for specific data. This potentially allows an attacker to impersonate a user.


The signature is already available - it's literally appended to the output. Verifying the signature is done by hashing the body with a secret (key) and comparing it to the known signature.

    "Known" === HMACSHA256(payload, secret)
You already know the left hand side. How would a constant time comparison protect you from a timing attack in this scenario? It wouldn't. The signature is known, the payload is known, but the secret is not known. That means you'd have to attack the secret, which you could even do offline. By iteratively changing the secret, even if you were to get the first half of the hash right, the next character you change would potentially change every single character in the output.

JWT/JWS does not concern itself with keeping the token secret or providing encryption. Its only concern is message authentication. If you leak your token, someone else can impersonate you, just like if you'd leaked your session cookie. The only viable attack on the scheme is to brute force (or discover) the secret, which would allow you to sign a message of your choosing. Once you can sign a message as if you were the server, then you may impersonate anyone you choose.


First, the signature isn't necessarily public. As you said, there is a separation of concerns. If the JWT authentication is wrapped underneath encryption, then the signature is private.

In either case, public or private, an attacker shouldn't be able to create their own tokens at their leisure. Token theft is guarded against using encryption, token creation is guarded against using constant-time compare. Encryption cannot protect timing attacks.

Again, the hash secret is irrelevant in the case of trying to generate a token for a specific user. The signature for a specific user payload is an opaque hash that can be discovered via timing analysis. The secret hash function that was used to produce it is not important if timing information is available.

If you cannot understand this, please I beg you, do NOT design crypto systems. If you'd like to be an even better Good Samaritan, please disclose your identity to the community. This may protect your future employer and their customers from a breach.


> If you cannot understand this, please I beg you, do NOT design crypto systems. If you'd like to be an even better Good Samaritan, please disclose your identity to the community. This may protect your future employer and their customers from a breach.

First let me say that I believed we were having a decent conversation until this comment right here. You're personally attacking me at this point. I am not designing crypto. In my first message I stated multiple times that those were my thoughts. My identity isn't hidden - if you care to look through my history. Your account is 20 days old, and you're talking to me about making my identity public?

Anyhow, I now understand the attack you've been describing. I misunderstood the attack, thinking you wanted to be able to sign any message of your choosing. Rather, the attack is focused on iteratively changing the signature of a message until the server confirms it. I was wrong, you were right. The spec for JSON Web Algorithms (JWA)[0] confirms this.

[0] https://tools.ietf.org/html/draft-ietf-jose-json-web-algorit...


It wasn't a personal attack. Computer security is a minefield of unknown unknowns. Individuals that aren't deeply familiar with all the modern attacks or aren't skeptical of unknown attacks shouldn't design crypto, has nothing to do with you. Some people need a document to tell them what the secure thing to do is, security analysts don't. I would beg anyone who claims that timing attacks against signatures don't exist to not build crypto systems. In the same way I would beg anyone who denies climate change to not make public policy, wouldn't you?


It wasn't the "don't build security" that was the attack. It was "be a good boy and tell us who you are so we're protected from your incompetence" that I'm taking issue with.

I think it's particularly unfair seeing as your original reply to me was so brief that I missed the crucial point. Anyhow, we're done here.


Asking you to disclose your identity was only a suggestion and it was offered at your discretion, appealing to your potential desire to be a Good Samaritan.

My sincere intent was not to offend you but protect you from yourself. I had already offered what I thought was sufficient explanation for the attack and I felt there were no further options.

I did not mean to offend you but since I have I apologize. I don't believe one's self-worth or employability is based on their expertise in computer security but I can see how that can be implied by what I wrote and I was wrong to be careless with my language online. Honestly I am no expert either. I just hope we can all cooperate, put our egos aside, remain humble in our limited knowledge, keep an open mind and stay focused on the facts to build more secure systems.

Peace and well wishes to you and your family.


A similar approach is encrypted cookies. You can take data and sign it. The server can then check whether the cookie data is correctly signed and accept or reject the data in the cookie. This approach also scales horizontally. I've been using it for a while now in Go (http://www.gorillatoolkit.org/pkg/sessions). If you want, you can also encrypt the expiry date into the secured cookie.
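The signing half of that is a few lines in any language; a Node sketch of the same idea (not the gorilla implementation):

  const crypto = require('crypto');

  function signCookie(value, secret) {
    const mac = crypto.createHmac('sha256', secret).update(value).digest('hex');
    return value + '.' + mac;
  }

  function verifyCookie(cookie, secret) {
    const i = cookie.lastIndexOf('.');
    if (i < 0) return null;
    const value = cookie.slice(0, i);
    // Note: use a constant-time comparison here in real code.
    return signCookie(value, secret) === cookie ? value : null;
  }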


Yes, I've been using encrypted cookies for years. If the user wants to use my website - they can store their own session data.

The trade-off is extra bytes over the wire, as the cookie is sent with each same-domain request.


Not if you use HTTP2


Aren't cookies the safest approach for storing authorization tokens? I recently found out that both Google and Facebook use cookies for authorization, so it seems like the way to go, though I've read that it gives programmer headaches.


Cookies can be/are used for extensive tracking (even when you're not technically on the site). The EU requires websites that use cookies to display a confirmation message on how cookies will be used to track the user.


The infamous "cookie" law isn't limited to cookies. JWT, local storage and co all fall under the same law that will force you to put a disclaimer on your webpages. There is absolutely no difference between the two in that regard. The fact that you think otherwise is a serious legal risk for your business if you have one.


the EU requires websites that use _any_ kind of local storage to display a confirmation message on how the storage will be used to track the user. It is not specific to cookies.


This is the 3rd or 4th article in the last 3 weeks on JWT. Each has argued that JWT is either secure or totally useless. What is the deal?


The two are not mutually exclusive. Either way, my article (http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-fo...) argues neither.


This is a practical implementation of something I described on stackoverflow in 2013. http://stackoverflow.com/questions/319530/restful-authentica...

The discussion there is rather interesting. The problem of invalidating logins is discussed. I have not found any satisfactory solution to this problem. You can set a timeout on tokens but then the user would have to log back in periodically. If the software can renew the token automatically then there is nothing to stop an attacker with a stolen token from doing the same, indefinitely. Still, in many situations these problems are no worse than compromised session based logins.


Serious question: are there any opinionated frameworks that use JWT for session-like or session-replacing ways?

There was one linked in this thread [1], but how did framework authors tackle this domain? Is everything discussed here entirely homegrown, ad-hoc? Is this something that every one of us has to read up on and re-implement every time?

[1] https://github.com/hharnisc/auth-service


Reminds me of "macaroons".

http://research.google.com/pubs/pub41892.html "Macaroons: Cookies with Contextual Caveats for Decentralized Authorization in the Cloud"


It's unfortunate that the JWT encoding scheme of the signed data (non-normative JSON -> base64 -> concatenate with a dot) does not lend itself well to hash-chaining. JWTs could have been a great standard encoding for macaroons.

(disclaimer: I'm an author of that paper)


Solid point... I suspect many of the shortcuts in JWT (e.g., not making any attempt to normalize / canonicalize before signing) are a backlash against the implementation headaches of SAML, specifically XML-DSig and the troubles associated with the "tell me the ID of the signed element and I'll verify it's signed" mode of operation.


I didn't understand this part:

"if your application maintained a list of key/algorithm pairs, and each of the pairs had a name (id), you could add that key id to the header and then during verification of the JWT you would have more confidence in picking the algorithm"

This implies there's a security benefit, but I don't understand how it's better than checking the alg parameter against a whitelist. Perhaps if you're using non-standard names for algorithms, that guards against mistakes?


Why the obsession with 3-letter short names? Why "typ" and not "type"? I'm sure the overhead can be ignored and the parser doesn't care.


The JWT has to fit inside HTTP headers, which means it's not unlimited in size. The default header size limit varies by web server, but once you get above 8k it becomes a game of "which reverse proxy is choking on these headers this time?".

It's compounded by the fact that a lot of web servers (e.g. nginx) have a global header size that limits all of your headers together, not just any one header, which means your JWT size limit is nondeterministic, especially if you have large-ish cookies in your request.

There's standard ways to include the JWT in the request body, like form encoding, but that doesn't work for GET requests, so in practice everyone uses the Authorization header.


> There's standard ways to include the JWT in the request body, like form encoding, but that doesn't work for GET requests, so in practice everyone uses the Authorization header.

You can include a JWT as an access_token URL parameter, which works in a GET:

    https://foo.invalid/bar?access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9.TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ


> You can include a JWT as an access_token URL parameter, which works in a GET:

Only works with IE > 11, and even IE 11 breaks down sometimes. Also, search engines choke on large URLs, see http://stackoverflow.com/a/417184/1933738


Ummm, every browser for decades has supported URL parameters. I can't believe that IE breaks with them.

Yes, large URLs are an issue, but not with search engines & access tokens, since search engines shouldn't ever see access tokens. A JWT should be lightweight — if it's big, then it's wrong.


> but not with search engines & access tokens, since search engines shouldn't ever see access tokens

It's like with the PHPSESSID URL parameter before cookie support was widespread. You can very well attach a session to a search-engine bot, and there are explicit hints for appdevs to do so.

You have to pass the tokens somehow, and search engines usually don't run JS.

> A JWT should be lightweight — if it's big, then it's wrong.

A JWT should replace sessions, and I have seen megabyte-sized session files on servers. People put an awful lot of stuff into sessions. Especially if application state is contained in the session. Oh, and if your application e.g. stores paths to files in the session and you switch 1:1 to JWT, you will leak server information to the client, which is a security hole.


There are length restrictions for GET parameters in older browsers.


Up to 2000 characters is generally considered fine.


And then your non tech savvy mother sends you a link to this cool website, and you're logged in with her credentials..


Could this be an issue if you try passing the refresh token as well? Is there some limit for the length of a GET request parameters?


There are practical limits, but lightweight JWTs won't run up against them. Heavyweight ones would, of course.


...and then it goes and uses JSON, which is definitely not the most compact of serialisation formats. The whole thing is base64'd anyway, so something like ASN.1 PER which is a "true" binary format would be far better than base64'ing what is essentially text. But then, in my experience, Web standards tend to be weird like that.


I recently added support for reading/writing localStorage variables in intercooler.js to support things like this:

https://github.com/LeadDyno/intercooler-js/commit/e83a1ff76a...


So, I've started working on a new project recently, and thought I'd use JWT (instead of Ye Olde cookie-based sessions). After working with it for a bit, I've decided that I can probably get away with using it for authentication, but definitely NOT for authorization.

I'm using a kind of RBAC and storing Roles in the JWT just seems like a bad idea. Header size is one issue, but also there is the problem of granting/revoking a role and having that change reflected immediately (rather than waiting on the token refresh window).

So, now my API requests happen thusly: "Ok, I have a valid token for User X, so I accept that this request came from User X. Now, let's check User X's roles in the database to see if they have permission to perform action Y on resource Z..."
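As middleware that split looks roughly like this (Express-style sketch; the role store is an assumed interface):

  const jwt = require('jsonwebtoken');

  function requireRole(role, roleStore) {
    return async function (req, res, next) {
      try {
        // The JWT only answers "who is this?"...
        const token = (req.headers.authorization || '').replace('Bearer ', '');
        const { sub } = jwt.verify(token, process.env.JWT_SECRET);
        // ...the database answers "what may they do?", so grants and
        // revocations take effect immediately.
        const roles = await roleStore.getRoles(sub); // assumed lookup
        if (!roles.includes(role)) return res.sendStatus(403);
        req.userId = sub;
        next();
      } catch (err) {
        res.sendStatus(401);
      }
    };
  }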

Hmm... I'm not sure this feels right.


The thing that scares me about using JWT is that all security completely relies on the one secret that is used to sign tokens - any person with access to that secret has got potentially unlimited access to the app. They can now impersonate any user and do basically anything.


Yes, it'd be better if JWTs were full-fledged certificates, where ultimate authority could be confined to some offline key, which delegates authority for a strictly delimited period to online keys. Or ultimate authority could belong to k out of a collection of n keys: one would need to suborn k keys to suborn the authority as a whole.

RFCs 2692 & 2693 specify a really great, lightweight way to do that. The resulting certificates needn't be a whole lot heavier than a JWT, and are much lighter-weight than an X.509 certificate. The RFCs also specify an intelligent, domain-neutral algorithm for performing calculations on tags (what JWT refers to as 'claims') and keys.

It's a pretty awesome solution, and there are a lot of good ideas in there. A streamlined version could, I think, end up being competitive with JWTs.


SPKI was deprecated for SDSI[1] (also done by Rivest), both of which AFAIK haven't been touched in ~20 years (which is fine by me, if the theory and implementation are solid, but SDSI has CORBA/J2EE smells all over the RFC from what I remember. Lightweight, eh...)

[1] https://people.csail.mit.edu/rivest/sdsi11.html


> SPKI was deprecated for SDSI

No, it's the other way around: SDSI was deprecated for SPKI, which took a lot of its ideas about naming from SDSI.

> both of which AFAIK haven't been touched in ~20 years (which is fine by me, if the theory and implementation are solid, but SDSI has CORBA/J2EE smells all over the RFC from what I remember. Lightweight, eh...)

SPKI is indeed old, but the fundamental ideas are really good, and some of them (the cert calculus) are timeless. It needs a v2.0 to update the recommended crypto, specify some more use cases and so forth. But it's really, _really_ good, far better than XPKI and extremely capable.

And still pretty lightweight.


Hey, if the conceptual grounds are sound, which I'm guessing they are (I mean, Ron Rivest), age doesn't quite matter w/r/t the timeless elements. Rijndael is mathematically sound, and honestly I've got more trust in older algorithms than newer ones, if only because there's been more time for the populace to vet them[1], presumably fortifying them with time.

All of the resources I've searched for are fairly old, do you have anything more recent that I can read up on? I see a 2006 paper, but not much other than that.

[1] Though I'm well aware that having an open-standard available for a long time doesn't mean squat, as evidenced by Heartbleed-esque bugs.

Edit: Reading the '00 "A Formal Semantics for SPKI" Howell, Katz, Dartmouth. This is what I was looking for.


I wonder what you think so far.

In particular, I liked the tuple-calculus they define; it can be extended to support just about any permission I can think of (although it does require reversing DNS names, which is slightly ugly).

I have a scheme to release a v2.0 of the standard someday in my copious free time.


Sometimes it's good to use asymmetric crypto, and those RFCs could perhaps be suitable for that. My impression of the parent comment is not that of a preference for asymmetric over symmetric, however, but rather of a lament that key-based crypto is defeated when attackers learn keys. After all the complaint about secret keys applies just as well to private keys.


> The thing that scares me...

You should rotate your keys often enough to alleviate your fears. You probably ought to trust your MAC a lot more than you trust other links in the key-security chain, anyway.
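One common rotation scheme uses the standard `kid` header so outstanding tokens stay valid during the overlap window; a sketch assuming PyJWT (the key ids and secrets are made up):

    import jwt

    # Keys still accepted for verification; only the newest one signs.
    # Rotating = add a new entry, drop the oldest once max token age passes.
    KEYS = {"2016-06": "new-secret", "2016-05": "old-secret"}
    CURRENT_KID = "2016-06"

    def sign(claims):
        return jwt.encode(claims, KEYS[CURRENT_KID], algorithm="HS256",
                          headers={"kid": CURRENT_KID})

    def verify(token):
        kid = jwt.get_unverified_header(token).get("kid")
        if kid not in KEYS:
            raise jwt.InvalidTokenError("unknown key id")
        return jwt.decode(token, KEYS[kid], algorithms=["HS256"])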


I've had this "them: we should use JWT, it's easier to scale; me: the customer has 200 clients, they don't need to scale shit, sessions are fine" argument before, and honestly at this point I'm pretty convinced JWT isn't a replacement for session auth.

The one place I can see it might be useful is when you need to give a third-party system some kind of access to your API, as an alternative to storing secondary hashed fields for API keys.


A few years later : "JWT in place of sessions considered harmful".


JWT is especially useful for validating requests in a microservice architecture. You can pass the token around and embed roles in it. No need to keep a session store alongside!
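Roughly like this, assuming PyJWT and a shared HS256 verification secret across services (both assumptions):

    import jwt

    VERIFY_KEY = "shared-verification-secret"

    def handle_request(authorization_header):
        # "Authorization: Bearer <token>" forwarded from the edge service.
        token = authorization_header.split(" ", 1)[1]
        claims = jwt.decode(token, VERIFY_KEY, algorithms=["HS256"])
        # Roles travel inside the token itself -- no session store lookup.
        if "billing:write" not in claims.get("roles", []):
            raise PermissionError("insufficient role")
        return claims["sub"]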


On another note, I've been working on a service that generates expirable/refreshable JWTs. It's a good way to start trying them out: https://github.com/hharnisc/auth-service


Better remove that again. It is extremely dangerous to store any kind of credential in Local Storage. Cookies are the (only) correct place for storing credentials.


Because you can set Secure and HttpOnly flags on cookies? This merely brings them up to the same level of security you get with Local Storage. http://blog.portswigger.net/2016/05/web-storage-lesser-evil-...


What is a "level" of security? If I'm able to inject arbitrary code into your page, with it I can access your local storage data, but I can't access your "http only" cookies - so there's at least some "level" of difference.
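For contrast, a stdlib sketch of issuing such a cookie (the token value is a placeholder). A cookie flagged HttpOnly never shows up in document.cookie, whereas anything in localStorage is one getItem() call away from injected script:

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["token"] = "jwt-token-value"
    cookie["token"]["httponly"] = True   # invisible to page JavaScript
    cookie["token"]["secure"] = True     # only sent over HTTPS
    cookie["token"]["path"] = "/"

    # Emit as a response header line.
    print(cookie.output())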


If you're at the point where someone can inject random code into your site, you've already lost and have so many more problems than access to localStorage.


Correct me if I'm wrong, but the benefits sound very similar to a good old-fashioned cookie, except that you're not limited to 4KB.


This is the common misconception. Think of a cookie as a storage 'bucket' on a user's device. It can store up to 4KB of, well, anything (a string). JWT is a format. That's it. It is encoded and signed, and it can carry JSON, but otherwise it's just a format or scheme that lets you organize how you store your session (or whatever) data. As an example, in my usage I actually store my JWT in a user cookie, and all it keeps is the expiration, user id, and a few other tidbits I don't want to hit a database for but am comfortable exposing in case the token is cracked.
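A sketch of such a minimal token, assuming PyJWT (the claim names and values are illustrative):

    import time
    import jwt

    SECRET = "signing-secret"

    # Only non-sensitive claims: enough to skip a DB hit, harmless if decoded.
    claims = {
        "sub": "user-42",                # user id
        "exp": int(time.time()) + 3600,  # expiration, an hour out
        "plan": "pro",                   # a tidbit we're fine exposing
    }
    token = jwt.encode(claims, SECRET, algorithm="HS256")  # goes in the cookie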


I agree; that's how we treat them with Feathers. You may know this already, but JWTs are meant to be decoded on the client, so you shouldn't be saying "if" it is cracked, more "when". The signature is only good for ensuring that the content hasn't been manipulated. Not that you are, but for others: never store anything sensitive inside a JWT, and if you must, make sure you encrypt it first before you put it in the JWT payload.


> but am comfortable exposing in case the token is cracked.

If the token is cracked, that is essentially an account compromise. You probably meant base64-decoded, if the token is 'leaked'?
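To underline the "leaked, not cracked" point: the payload is readable with nothing but a base64 decode, no key required (stdlib sketch):

    import base64
    import json

    def peek(token):
        # The payload is the middle dot-separated segment of a JWT.
        payload_b64 = token.split(".")[1]
        payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
        return json.loads(base64.urlsafe_b64decode(payload_b64))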


Right, but then the browser is automatically doing the job for you (saving the cookie and then returning it on the next request). That would be so bad! We can't write custom code to achieve the same thing...


The assertion that cookies require server state is wrong. A JWT would easily fit inside a cookie. Or a URL for that matter.


How does this avoid the problem of a third party getting your JWT? Then they can make any request as you.


Tying the token to an IP address will largely ameliorate that issue. As always, there is a tradeoff of security and convenience. I've worked at facilities with no internet access, where hard drives were removed and put in safes at the end of the day, and that had "leper lights" which flashed when unsecure people like me were present. Very secure but hardly convenient. You always need to ask yourself what it is that you are securing.


You should not tie any functionality, user experience, or (last but certainly not least) security to an IP address. IP addresses can be spoofed rather easily, and using public wifi, hotel/guest networks, and even certain ISPs means a user's IP address can rotate on every request.


Adding an IP lock to a token can only increase security - it cannot decrease security. And if you wish to be secure, then you wouldn't use public wifi.
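Mechanically, the lock is just one more claim checked at verification time; a hedged sketch with PyJWT (the "ip" claim name is my own):

    import jwt

    SECRET = "signing-secret"

    def issue(user_id, client_ip):
        return jwt.encode({"sub": user_id, "ip": client_ip}, SECRET,
                          algorithm="HS256")

    def verify(token, client_ip):
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        # Rejects a skimmed token replayed from elsewhere -- but also
        # legitimate users behind rotating/pooled IPs (the tradeoff).
        if claims["ip"] != client_ip:
            raise jwt.InvalidTokenError("token bound to a different IP")
        return claims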


What you're doing is increasing complexity, even if marginally, for no benefit and to the inconvenience of your users. Sometimes increased complexity does introduce insecurities, but I digress. The developer has little to no control over what type of network (wi-fi, public hotspot, hotel, corporate, private, etc.) a user accesses the site with, and the user couldn't care less how sessions are managed (for the most part). If their token becomes invalid on every request because their network pools and rotates IP addresses between users, forcing them to re-authenticate each time, well, you're going to have some very unhappy users.


Most enterprise application users who are working are unhappy users. I'm not trying to fix that problem.


JWTs are the capability model. They are not forgeable and they expire. You can revoke them by telling the database to ignore them by token id. If you have a services-based model, you can pass them along and the services can verify your capability without going to the DB.
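Revocation by token id might look like this, assuming PyJWT and an in-memory stand-in for the real denylist table (both assumptions):

    import uuid
    import jwt

    SECRET = "signing-secret"
    REVOKED_JTIS = set()  # in practice: a DB table or Redis set

    def issue(user_id):
        return jwt.encode({"sub": user_id, "jti": uuid.uuid4().hex},
                          SECRET, algorithm="HS256")

    def revoke(jti):
        # No key material changes; we just remember which token ids to ignore.
        REVOKED_JTIS.add(jti)

    def verify(token):
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        if claims["jti"] in REVOKED_JTIS:
            raise jwt.InvalidTokenError("token revoked")
        return claims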


Functionally, that sounds a lot like HTTP Basic authentication, only more complicated.


XHR? Sessions.

REST? Bearer tokens or OAuth.

Third party trust? JWT.


People don't realize it is not a proper comparison, as JWT is only the format/spec - you can still achieve a stateless client session by encrypting an XML payload (e.g. a user id) into a browser cookie. Storing data on the client and verifying it by signature is not a new thing.
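Indeed, the same trick needs nothing but an HMAC; a stdlib sketch (the payload could be XML, JSON, anything):

    import base64
    import hashlib
    import hmac

    KEY = b"server-side-secret"

    def sign(payload: bytes) -> str:
        mac = hmac.new(KEY, payload, hashlib.sha256).digest()
        return (base64.urlsafe_b64encode(payload).decode() + "." +
                base64.urlsafe_b64encode(mac).decode())

    def verify(cookie_value: str) -> bytes:
        payload_b64, _, mac_b64 = cookie_value.rpartition(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        expected = hmac.new(KEY, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(base64.urlsafe_b64decode(mac_b64), expected):
            raise ValueError("cookie failed signature check")
        return payload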


Ah, Kerberos :-)



