JSON Web Tokens should be avoided (paragonie.com)
386 points by CiPHPerCoder 253 days ago | 297 comments



The criticisms of JWT seem to fall into two categories:

(1) Criticizing vulnerabilities in particular JWT libraries, as in this article.

(2) Generally criticizing the practice of using any "stateless" client tokens. Because there's no great way to revoke them early while remaining stateless, etc.

The problem is that both of these groups only criticize; neither of them ever seems to actually recommend an alternative.

I could care less about JWT per se. I'm happy to implement a similar pattern with something else (e.g. store a secure cookie post-auth, skip all the refresh business and just let it expire when it expires, and employ an ugly revocation strategy only if absolutely necessary). I don't need JWT for this.

If I'm providing a REST API, then I'd prefer a token string that I could pass as a header value rather than forcing the use of cookies. Although I suppose you could argue that a cookie is just another header value.

Either way, if you're serving up a REST API to a JavaScript UI... what's NOT a good option is server-side session state (e.g. Java servlet sessions). That requires you to either: configure your load balancer for sticky-sessions, or employ a solution to share session state across all your server-side instances (which never works very reliably). Moreover, relying on a session isn't a very RESTful auth strategy in the first place.

So if I'm writing a SPA in 2017, then I'm definitely taking a client-side approach and running afoul of the #2 critics. And since JWT is so widely implemented (e.g. if I use a "Login with Google" option then I'm using JWT), I'm probably running afoul of the #1 critics too.

These criticisms are fine, I guess. There's no faster route to blog clicks, book sales, speaker invites, and consulting dollars than: (1) telling everyone to jump on this year's hype train, or (2) telling everyone that last year's hype train sucks. What the world really needs is a few more actual prescriptive recommendations of what to do instead.


I don't care if you want to use stateless client tokens. They're fine. You should understand the operational limitations (they may keep you up late on a Friday scrambling to deploy a token blacklist), but, we're all adults here, and you can make your own decisions about that.

The issue with JWT in particular is that it doesn't bring anything to the table, but comes with a whole lot of terrifying complexity. Worse, you as a developer won't see that complexity: JWT looks like a simple token with a magic cryptographically-protected bag-of-attributes interface. The problems are all behind the scenes.

For most applications, the technical problems JWT solves are not especially complicated. Parseable bearer tokens are something Rails has been able to generate for close to a decade using ActiveSupport::MessageEncryptor. AS::ME is substantially safer than JWT, but people are swapping it out of applications in favor of JWT.

Someone needs to write the blog post about how to provide bag-of-attributes secure bearer tokens in all the major programming environments. Someone else needs to get to work standardizing one of those formats as an alternative to JWT so that there's a simple answer to "if not JWT then what?" that rebuts the (I think sort of silly) presumption that whatever an app uses needs to be RFC standardized.

But there's a reason crypto people hate the JWT/JOSE/JWE standards. You should avoid them. They're in the news again because someone noticed that one of the public key constructions (ECDH-ES) is terribly insecure. I think it's literally the case that no cryptographer bothered to point this out before because they all assumed people knew JWT was a tire fire.


>that rebuts the (I think sort of silly) presumption that whatever an app uses needs to be RFC standardized.

I thought the crypto mantra was "Never roll your own." An RFC (Request For Comments) is a literal attempt to follow that advice by seeking the advice of cryptographers who are presumably smarter at coming up with crypto standards. Where were the cryptographers during the draft phase when comments were being solicited?

>I think it's literally the case that no cryptographer bothered to point this out before because they all assumed people knew JWT was a tire fire.

Oh. Cunningham's Law. You know, if you're not part of the solution, you're part of the precipitate.

That's some considerable JWT fallout, since companies making a business out of security are endorsing it. Auth0 for example, https://auth0.com/docs/jwt

Until I read this article, I was under the impression JWT was the best new thing.


I don't care about these moral arguments. I'm making a simple, positive claim: JWT is bad. You can blame whoever you'd like for it being bad, but as engineers, you need to understand first and foremost that JWT is bad, and reckon with your feelings about that later.

You have a responsibility to build trustworthy systems, and you get no pass on building with flawed components simply because you wish experts had made those components less flawed.


"Contribute to standards processes" and "don't roll your own" aren't moral arguments. They're complimentary pieces of practical advice on how to make trustworthy systems.

Meanwhile your comments bury whatever substantive content they might hold under layers of emotional, accusatory garbage. Maybe get those feelings locked down a bit before posting?


I have no feelings on the subject. I don't use JWT. I just want to point out this sounds like (and continues to sound like) "Roll your own" advice to me.


Do you have any recommendations for SPAs where the API is hosted on a different subdomain than www? I think everyone agrees that JWT is a bad spec, the problem is that setting cookies across subdomains ranges from difficult to impossible.

If you have access to an experienced devops team who can securely maintain an nginx server with some proxy logic then maybe that's a possibility, but otherwise what other viable options are there? Wishing that JWT were more secure won't make it so, but neither will wishing that CORS were more flexible. And if it's a choice between subclassing the JWT handlers to provide a couple extra security checks vs trying to securely configure and maintain a whole extra proxy setup, then the former seems like the lesser of the evils.


Does this mean that using AWS Cognito [1] is out of the question since it uses JWT? Unfortunately, you can't change what the service uses as it's all under Amazon's control.

[1]: https://docs.aws.amazon.com/cognito/latest/developerguide/am...


> they may keep you up late on a Friday scrambling to deploy a token blacklist

Because every token has an iat datetime, you don't need a token blacklist to invalidate tokens. You just need some sort of tokens_invalid_if_issued_before_datetime setting that gets checked whenever you validate the signature of a token.

The alternative is to store a UUID for each user, and just rotate those whenever they log out, change or reset their password, or there is some sort of security event. These are then stored in the payload and used as a secret. The one advantage over just using dates is that dates can produce weird bugs if you have multiple servers with clocks that are out of sync.

But you shouldn't ever need to blacklist specific tokens, at least not unless you have some highly specialized use case.
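Roughly, the iat-cutoff approach above can be sketched like this in Python, stdlib only. (This is a minimal illustration, not a vetted library; the key and field names are hypothetical, and a real deployment would also check `exp` and pin the header's `alg`.)

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side signing key"  # hypothetical; load from config in practice

def b64url_decode(s: str) -> bytes:
    # Restore the base64url padding that JWTs strip off.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def validate(token: str, invalid_if_issued_before: float) -> dict:
    """Verify an HS256-style token and reject anything issued before the cutoff."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(SECRET, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload_b64))
    # The "blacklist-free" revocation check: bump the cutoff (globally, or
    # per user on a field you already load anyway) and every earlier token
    # becomes invalid at once.
    if claims["iat"] < invalid_if_issued_before:
        raise ValueError("token revoked: issued before cutoff")
    return claims
```

Bumping the per-user cutoff on logout/password change gives you the "revoke all sessions for this user" behavior without any token list.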


Agreed. Revoking all user sessions instead of a specific token is the common case. The only usage I see for revoking a specific token is when the user is deactivating a specific client.


Also when a user changes their password, no?


> The alternative is to store a UUID for each user

Is that not effectively a server-side session?


> Is that not effectively a server-side session?

With most web frameworks (e.g. Django), the user model is retrieved on every request anyway. So it would be perhaps more accurate to say that it's a server-side session that's effectively not a server-side session, since no additional lookups are needed, only the user model lookup that's already done anyway.


So every user on your system has to reauthenticate if one client token is compromised? That seems like an invitation to a thundering herd. Not necessarily fatal, but I'd consider it a nice feature to not have to invalidate everybody's tokens to get at one.


> So every user on your system has to reauthenticate if one client token is compromised?

No, because you would also store either a separate datetime or uuid on each user model. And if just one user has their credentials compromised, then you would bump the date or generate a new UUID for just that user.

The global datetime would only be bumped if some site wide vulnerability were found.


So there is a db roundtrip involved? Like an inverted session. What's the point of using JWT then?


For most web frameworks, the user model gets retrieved from the db automatically whenever an authenticated request is made. So there is no extra lookup.


Oh man.

Proponents say critics don't offer alternatives, while at the same time, if you dig deep enough, the proponents have always just reverse-engineered sessions.

I give up. JWT is just a hip thing to do right now. :(


Got it, didn't catch that you were referring to storing that timestamp per-user.

That's what I do in my system.


I appreciate that this comment is wise from a cryptosystems perspective, i.e. there are a number of ways to do JWT wrong, not enough safety guards, etc., but is there not a subset of JWT that is safe to use?

The OP article makes it sound like it's impossible to use JWT correctly, but I was under the impression that if I 1) am the issuer, and 2) I hardcode a single algorithm on my API endpoints, that neither of the issues in the OP apply. (The EC issue would apply if that algorithm was chosen).

Is there a safe subset of JWT? And isn't there value to small players in using the safe subset of JWT which is battle-hardened by guys with big security teams like Google?


It's not that I need an RFC-standardized solution for everything, but I'd rather not roll my own anything related to crypto. Would something like crypto_auth(json(bag)) be better here? (crypto_auth from libsodium, json being sorted without whitespace)


Yes, that would be much better, and it's what I mean when I say that JWT doesn't bring anything to the table.
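For the curious, crypto_auth(json(bag)) is only a few lines. Here's a rough Python sketch using the stdlib's HMAC-SHA-512 truncated to 32 bytes as a stand-in for libsodium's crypto_auth (which is HMAC-SHA-512-256); in production you'd use an actual libsodium binding and a randomly generated key:

```python
import base64
import hashlib
import hmac
import json

def canonical_json(bag: dict) -> bytes:
    # Sorted keys, no whitespace: the same bag always serializes identically.
    return json.dumps(bag, sort_keys=True, separators=(",", ":")).encode()

def auth_token(key: bytes, bag: dict) -> str:
    msg = canonical_json(bag)
    # Stand-in for libsodium's crypto_auth: HMAC-SHA-512 truncated to 32 bytes.
    tag = hmac.new(key, msg, hashlib.sha512).digest()[:32]
    return base64.urlsafe_b64encode(tag + msg).decode()

def verify_token(key: bytes, token: str) -> dict:
    raw = base64.urlsafe_b64decode(token)
    tag, msg = raw[:32], raw[32:]
    expected = hmac.new(key, msg, hashlib.sha512).digest()[:32]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad token")
    return json.loads(msg)
```

One key, one MAC, one serialization: there's no algorithm negotiation for an attacker to play with, which is the whole point.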


"json being sorted without whitespace"

What is the significance of that part?


It makes JSON deterministic, which it isn't by default (e.g. {"foo": 1, "bar": 2} and {"bar":2,"foo":1} are both valid serialisations).

Of course, it'd be better still to use a format _meant_ to provide human-readable canonical representations of data, e.g. Ron Rivest's canonical S-expressions (http://people.csail.mit.edu/rivest/Sexp.txt), but of course this is information technology and we have to reinvent the wheel — usually as an irregular polygon — every 3-4 years rather than using techniques which are tried and true.


Ah yes, similar to canonicalization of XML for XMLSignature?

Presumably this means that you have to have a "flat" JSON structure rather than lots of nested objects and arrays?


Afaik you just need to alphabetize the properties of every object
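Right — and it recurses, so nesting is fine. In Python, for instance, this is one line (assuming a JSON-based stack; other languages need an equivalent sorted serializer):

```python
import json

def canonicalize(obj) -> str:
    # sort_keys applies at every level of nesting, so nested objects get
    # alphabetized too; separators=(",", ":") drops all whitespace.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"))
```

Both key orderings of the same nested object produce byte-identical output, which is what you need before MACing.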


Assuming that:

- your JWT libraries don't do anything dumb like accepting the `none` algorithm

- you're using HMAC SHA-256

- your access tokens have a short (~20 min) expiration time

- your refresh tokens are easily revocable

Can you elaborate on the specific security advantages that a token encoded with ActiveSupport::MessageEncryptor would have over such a JWT implementation?

Why do you think there aren't more AS::ME implementations out there if it's a superior solution? I only know of a Go implementation and haven't seen others: https://godoc.org/github.com/mattetti/goRailsYourself/crypto

Edit

I saw you mention Fernet in another comment. As a Heroku alum I'm quite familiar with Fernet (we used it for lots of things), but to my knowledge those projects are on life support at best.


You should also make sure to allow only tokens with the "HS256" alg headers before you verify them, in case somebody decides to add a new signature algorithm to your library, and it turns out it could easily be broken and lets you use the same key you used for HS256.
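A rough sketch of that pre-verification check in Python (stdlib only; a decent library should do this for you, e.g. PyJWT's `jwt.decode(token, key, algorithms=["HS256"])`, but it's worth seeing how cheap it is to do explicitly):

```python
import base64
import json

ALLOWED_ALGS = {"HS256"}  # pin exactly what your app issues, nothing more

def check_alg(token: str) -> None:
    """Reject any token whose header names an algorithm we didn't pin,
    before any signature verification is even attempted."""
    header_b64 = token.split(".")[0]
    padded = header_b64 + "=" * (-len(header_b64) % 4)
    header = json.loads(base64.urlsafe_b64decode(padded))
    if header.get("alg") not in ALLOWED_ALGS:
        raise ValueError(f"disallowed alg: {header.get('alg')!r}")
```

This is what defeats both the `none` trick and the RS256-key-used-as-HS256-secret confusion: the attacker never gets to pick the algorithm.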


If it's your software generating the tokens, then that means they'd need the shared key, or private key in order to sign the token... which is already a problem. Now if you're accepting tokens from a third party, that's another issue, and should be much more constrained.

I go farther still and require a VERY short expiration on service to service requests (documented as 1m, coded as 2m) which combined with https limits the chance of replay attacks.


yes, that's another good point and probably something many folks mess up. I am explicitly specifying my algorithm for both encode/decode :)


I'd also be interested in hearing an answer to this from tptacek.

My (limited) understanding is the security issues arise around the implementation & handling some of the default claims (NBF, IAT, etc.) and producing/verifying the signature.

But I don't quite understand how moving to a different format solves these issues?


> using HMAC SHA-256

HMAC is great for a monolithic architecture, but I've quite enjoyed using asymmetric RS256. I don't think that's something AS::ME offers.


> there's a reason crypto people hate the JWT/JOSE/JWE standards. You should avoid them

Could you give more info about this? If ECDH-ES is avoided, why else is JWT insecure?



Seems like, practically, that suggests three options:

1. Take something like AS::ME that already has real use and implement it for as many platforms as possible

2. Define a really restricted subset of JWT (which may be necessary anyway for purposes of saying to management "yes, we're buzzword compliant")

3. Invent a non-AS::ME "bag-of-attributes secure bearer token" system and implement it everywhere.

I think part of the trouble with 3 is that people like me genuinely worry that if we tried to roll our own we'd manage to do worse than JWT in spite of JWT being terrible.

So maybe step zero is for somebody with crypto knowledge to explain one sane way to do the "bag-of-attributes secure bearer token" part ... or you to point the audience to a blog post that already exists that describes it, because, well, because I suspect quite a few of us trust you to say "this post actually describes a sensible plan" while we don't trust ourselves to be able to tell.


Or just use Fernet:

https://github.com/fernet/spec/blob/master/Spec.md

Fernet was written originally for Python but there's a Ruby implementation, a Golang implementation, and a Clojure implementation. I believe that for at least 80% of applications considering JWT, Fernet provides exactly the right amount of functionality, and does so far more safely than JWT.


Since it's not linked from any https://github.com/fernet/ project as far as I could tell, and I had to google for it.

Clojure implementation:

https://github.com/derwolfe/fernet-clj

I also found a JavaScript implementation:

https://github.com/csquared/fernet.js


This looks very much like the approach to session-data-in-encrypted-and-signed-cookie I've seen used to great success in lots of places (where for a stateless-ish API the contents are just a user id or whatever).

Am I right that this would work fine both in that or in e.g. a query parameter?

(sorry if I'm asking really stupid questions, but I'd rather look stupid than accidentally a security hole)


1. The reason AS::ME can be that nice is because it assumes a monolithic architecture and a single framework.

For example, AS::ME relies on shared secrets, which I think makes it unfit for distributed systems. Implementing JWK with asymmetric keys can really reduce provisioning and configuration costs. Keeping the signing secret on one private, hardened auth server (or cluster) also allows smart things like automated key rotation.

2. 100%. There's at least one right way to do JWT, but more ways to do JWT wrong.

3. JWT et al provide a fine starting point, I don't see a reason to start from scratch.

I'm not tied to the JWT spec, but I'm quite happy with what I've been able to accomplish using a careful implementation in my AuthN server: https://github.com/keratin/authn


Agreed... my first two experiences with JWT were creating my own implementation... in my case, the allowed public keys had to come via https from a specific server in the domain, even without PKI using shared key... I had hard coded the algorithm used for the signature. This could just as easily be filters on a library though, it's just my first experience didn't have a valid library, so I had to composite one (did use existing crypto library though).

JWT is a perfectly valid structure, even if the spec is more flexible than it should be. For that matter, HTTPS has also historically supported algorithms and protocols that were later broken. Nobody is suggesting we stop using HTTPS, only that we limit the acceptable protocols and algorithms supported.


No, almost everybody in the field laments SSL and TLS. It's probably too late at this point --- and has been for well over a decade --- to get to something better than TLS, and so TLS 1.3 is what we're stuck with. But that is demonstrably not the case with JWT. We don't have to convince all the browser vendors to upgrade out of JWT in lockstep. Avoiding another 20 years of hair-on-fire crypto vulnerabilities seems reason enough to lobby against that spec.


But any given algorithm today may not be sufficient tomorrow... so we just don't use ANY encryption? JWT is a perfectly valid structure.. there are options as to signing, so use/limit as needed.


And I think JWT is more flawed than SSL/TLS.


For my service to service requests, I tend to require the token itself be set to an expiration of less than 1 minute from creation. I actually code 2min in the check, but document 1 for access clients. This allows for more than enough drift and with https mitigates the level of risk for replay attacks.

Beyond this a header/signature for the body/payload will reduce the risk of the rest.

As to being able to select the signature algorithm, or set the uri for the public key... ignore this, or whitelist domains or methods. Yes, there are some holes in a "by the books" implementation... that doesn't mean you need to support the entire spec.

I implemented about 1/2 the SCORM spec in an API once, and it was 8 years before a specific course needed a part that was missing. Yes, it isn't 100% compliant, but if it does the job, and is more secure as a result, then I'm in favor of it.


It seems that Matthew Green warned them about some of their choices though back in 2012!! https://www.ietf.org/mail-archive/web/jose/current/msg00366....


> Either way, if you're serving up a REST API to a JavaScript UI... what's NOT a good option is server-side session state (e.g. Java servlet sessions)

Can you explain what you mean, as opposed to other kinds of session tokens?

Roy Fielding makes it abundantly clear[0] in the seminal delineation of REST that

  We next add a constraint to the client-server interaction: communication must be stateless in nature, as in the client-stateless-server (CSS) style of Section 3.4.3 (Figure 5-3), such that each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client.
This has given me pause to doubt just how many people are really implementing REST, and/or how useful a model it is in modern web applications.

[0] https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arc...


> This has given me pause to doubt just how many people are really implementing REST, and/or how useful a model it is in modern web applications.

Not many. I've made a good career out of consulting people who are doing REST wrong :)

Usually they are either storing state or forgetting about HATEOAS (Hypermedia). Often they are also negotiating format and version incorrectly and in a way that doesn't scale.


I'm going to contend that a "correct" implementation of REST has never existed


IMO sessions for authentication are fine as long as we don't store any session variables. In the end someone's going to be keeping track of the number of GET requests made by each API consumer on each endpoint, and practically it's not breaking REST as long as that state doesn't affect the information GETable by the client.


I just can't reconcile the fact that if I hit an endpoint, and I get back certain data with a 2XX response code because I previously accessed a "login" resource, but I would have gotten a 4XX response code if I had not gone to that prior "login" resource, that I haven't violated REST: my request for the second endpoint takes advantage of stored context on the server. Even worse, if I restart the server, change out its database, or make some other stateful change on it, the response changes. And I don't mean to imply that this is a universal sin of computing architecture, but it sure as hell looks like a violation of REST, and it raises the question of what our standard is. Headers, path-based nomenclature, network layering, authentication - I'm on board with all of that. I just worry that a lot of people might be falling victim to commonplace misconceptions of just what REST is, which in turn may be causing us (the web dev community) to make misplaced value judgments.


REST doesn't mean no state in the world exists. It's not a violation at all that an endpoint changes its output. REST only reasons about idempotency, not reproducibility.


> Rest only reasons about idempotency, not reproducibility.

Absolutely. If you GET a collection resource, then POST a new item into the collection, then GET the collection again, the response will have changed. Having these kind of temporal dependencies on the answer you receive is not something REST argues against.


None of that violates REST; REST is not statelessness. In fact, REST (REpresentational State Transfer) is all about state and how it is changed and how those changes are manifested.


So, does this include web tokens too, or is auth a special case?


On another note.

Dear Americans, the queen would like you to stop saying "could care less" - by David Mitchell of Peep Show

https://www.youtube.com/watch?v=om7O0MFkmpw


I could care less about this request, but I can't be bothered to make the effort.


I am American, and I'm sure this bothers me more than it actually bothers the queen.

Between that and the incessant use of the incorrect phrase "for free" instead of the correct phrase "for nothing" it's amazing that I can stand to read anything on the internet.

xD


You should write a browser extension that fixes that. I'd use it.


Ooh, I like that idea. Don't tempt me!

xD


What's difficult about setting up a Redis cluster to back sessions? Yes, it adds a point of failure... so does having a database of any kind. However, I'd hardly call it difficult. If you're on Amazon, you can just create an Elasticache cluster and not even concern yourself with the ops.

I don't hate secure cookies or anything, but some people act like plain-old regular cookies haven't been thoroughly solved by this point.

Related: Something people get wrong a lot with secure cookies is worrying about obscuring the cookie more than securing it. Encryption does not give you authentication; you need MAC for that. An encrypted message can still be blindly modified. Imagine being able to change a UID stored in a "secure" cookie even if you couldn't 100% control which. Eventually, if you try enough permutations, you're going to escalate your privileges!
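The standard fix is encrypt-then-MAC: compute the tag over the ciphertext and check it in constant time before you ever try to decrypt. A minimal Python sketch (stdlib only, so the ciphertext is shown as opaque bytes; the key name is hypothetical, and the MAC key must be separate from the encryption key):

```python
import hashlib
import hmac

MAC_KEY = b"a key separate from the encryption key"  # hypothetical

def seal(ciphertext: bytes) -> bytes:
    # Encrypt-then-MAC: the tag covers the ciphertext, so any bit-flip
    # is detected before decryption is even attempted.
    tag = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def open_sealed(blob: bytes) -> bytes:
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("cookie was tampered with")
    return ciphertext
```

Without the tag, an attacker who can't read the cookie can still flip bits in it; with it, any modification fails closed.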


Nothing's difficult about that. But that doesn't mean that it's a good idea. How about distributing a signature and encryption key to all your servers, and using them to secure the outgoing and verify the incoming tokens. If you want easy, that's probably even easier than setting up a Redis store and tying your services to it.

Need an emergency revoke of every token? Easy: Replace your signature key. Any older token will fail signature verification. In which case, your system should require authentication and then generate a new token with an updated signature.


Here is the use case that led to the first implementation of JWT I was ever part of.

You have a single page webapp that uses two APIs for part of the application. For security reasons the APIs are zoned in such a way that neither of them can communicate with each other. The machine of the user sits in a zone where it can send HTTP requests to the zones of either API.

Now design a way to manage sessions across both APIs...

There are certainly a number of ways to accomplish this, but JWT was the cleanest and most performant.


When the authentication info needs to be used by servers in different locations around the world, JWT is better. Getting the session info from a redis server or something equivalent isn't free; verifying a JWT may be faster. So the most appropriate method depends on the use case.

If all the servers are in one location, a random byte sequence as session id key with cached info is the most simple, compact and efficient.


Or you can use read only Redis replication and keep your architecture pretty much the same https://redis.io/topics/replication


What happens when that replication breaks? Can people still log in? Can you still validate their sessions? Or are you going to have an outage in a geographical region?


Fall back to connecting to the main instance directly


And if the replication is down because the main instance can't be reached for a minute?


And if your user base is hit by a nuclear bomb?


The problem is you're suggesting real-world, ops-capable solutions to a problem "Devops" (as in, developers can do it, we don't need ops) people don't want to understand because they'd rather jump on yet another poorly designed "solution" to a problem they don't really have.


JWT signature verification usually takes less time than the network request to a redis server... assuming it's non-local, because HA.


I'll just say this: if the most expensive part of your API is calling Redis, it probably doesn't have anything worth authenticating for in the first place.


I can make an attempt at an alternative:

Distribute signing and encryption keys to all servers. Have them encrypt and sign the outgoing serialized token, whatever that consists of. Have them verify and decrypt the incoming token. This is just straight-forward cryptography, with keys known only to the server, so I'm pretty sure you won't get any arguments from (1). (And, I suppose the encryption could even be skipped, if you don't care that the internal format of the token is known.)

Emergency revocation of all tokens [0] is simply rotating the signing key. All tokens issued prior to the rotation will fail verification with the new key. That should trigger the authentication process, which will issue a new token with the updated key. This solves the revocation issue present in argument (2).

[0] Any other form of revocation is, in my opinion, not distinguishable from having server-side state. If you have to keep a list of bad tokens, why not just keep a list of the good tokens instead... And then it's only a short hop to the token being nothing but a key to lookup the full session state on the server.
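The whole scheme described above fits in a few lines. A hedged Python sketch (stdlib only, HMAC-only for brevity, no encryption layer; class and method names are made up for illustration):

```python
import hashlib
import hmac
import secrets

class TokenSigner:
    """Stateless tokens whose only 'revocation list' is the signing key itself."""

    def __init__(self):
        self.key = secrets.token_bytes(32)

    def issue(self, payload: bytes) -> bytes:
        # Token = payload || HMAC(key, payload); the tag pins the payload
        # to the current key.
        return payload + hmac.new(self.key, payload, hashlib.sha256).digest()

    def verify(self, token: bytes) -> bytes:
        payload, tag = token[:-32], token[-32:]
        expected = hmac.new(self.key, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            raise ValueError("invalid or revoked token")
        return payload

    def revoke_all(self) -> None:
        # Emergency revocation: rotate the key and every outstanding token
        # fails verification, forcing re-authentication.
        self.key = secrets.token_bytes(32)
```

A production version would also embed an expiry in the payload and distribute the key (or, with signatures, the public half) to all servers, but the revocation story is exactly this simple.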


This is exactly the solution I use. I use a secret server (Vault, using Consul as a backend) to securely distribute and manage the encryption keys.


And if you want to be able to authenticate users to a service that you do not want to send private keys to?


Then sign / encrypt the token with a private key and distribute a public key to the untrusted peers. Since you're just using bog-standard cryptography primitives you can change them at will to match your use case. Need to handle untrusted peers? Asymmetric keys are the answer.


Or, you use an asymmetric key pair to sign the JWT, and lock your environment to only public keys signed by your DC's cert in your org. If only that were supported by JWT.. oh, that's right, it is.

Nobody has to implement the FULL spec, you only need to allow what your environment needs.


I'm not really sure what your point is, other than an apparently fervent desire to prove JWT's worth while speaking down to me. I didn't say that JWT couldn't do that... You're the one who set up the strawman of untrusted parties, then gleefully knock it down after I address the issue. You have contributed no other valid feedback to my proposal, just a defense of JWT which is not an answer to anything I ever stated.

I just want you to know that such tactics are not very appreciated from this side of the conversation.

What does JWT provide that using bog-standard crypto primitives in the way I described doesn't? Other than a name and a standard?


You do understand that this is fundamentally the exact underlying mechanic of JWT?


You call it the underlying mechanic. I call it the only necessary mechanic. If those additional mechanics are where the security issues come from, then just get rid of them.


No one is forcing you to accept/implement ALL possible aspects of JWT.. in fact, that's generally a bad idea... Only need to implement what you need. If a specific algorithm is bad, don't allow it...

Isn't this how HTTPS works? HTTPS today doesn't use the same SSL and algorithms allowed in 1996; it has evolved and changed in practice. The author isn't suggesting everyone just not use HTTPS because some possible algorithm has been determined to be weak, is he?


> No one is forcing you to accept/implement ALL possible aspects of JWT.. in fact, that's generally a bad idea...

I think this is very interesting, because it's basically validating the article's argument. People are going to feel safe implementing JWT because it's an RFC, without knowing where these "generally bad idea" landmines are. That's the dangerous part.

And yes, the same issues exist in SSL / TLS. And guess what? There's loads of articles just like this one stating how dangerous older modes of these protocols are. Articles like this and the discussions they spawn are exactly the kind of thing necessary to move the world forward into safer implementations.


> If I'm providing a REST API, then I'd prefer a token string that I could pass as a header value rather than forcing the use of cookies

Aren't cookies just strings passed as HTTP headers?


I hope someone can explain to me in practical terms difference between a session cookie string on a request and a token as header value.


Cookies are just strings in a header. The difference is that, unlike normal headers, browsers treat cookie headers in a special way. They automatically add and remove keys, and they allow the server to set a cookie in a way that client-side script can neither see nor change (the HttpOnly flag).


The downside being that the browser will attach it to every request; if you use cookies, you MUST be aware of this, or you are (IMO) pretty much guaranteed to write a CSRF vuln.

(I'm much more in the localStorage + Authorization header camp for this reason. I recommend [1] for reading. If malicious JS is running, cookies won't save you, since the malicious JS is capable of simply making the request itself, to which the cookie will automatically be attached by the browser. localStorage+JS eliminates CSRF. If someone XSS's you, the difference is irrelevant.)

[1]: http://blog.portswigger.net/2016/05/web-storage-lesser-evil-...


> I'm much more in the localStorage + Authorization header for this reason.

That's just exchanging one security issue for another. Now you have the ability for people to steal tokens after an XSS attack. And yes, that's significantly different from "can make requests on your behalf".

The correct solution is to solve the CSRF vulnerabilities by using CSRF tokens. Not to change your auth persistence mechanism.


CSRF protection:

* Use SameSite cookies (unfortunately, not yet supported by all browsers)

* Don't accept application/x-www-form-urlencoded, multipart/form-data, or text/plain at your endpoints, or

* Use CSRF tokens if you need to accept server-side rendered HTML forms


CSRF is the easiest vulnerability to avoid, a csrf token solves all csrf attacks.

XSS is a lot harder to protect against, one of the better ways to mitigate it's effects is to use http only cookies


Well, I keep running into CSRF vulns. in the wild, so…

XSS is avoidable by systematically using a framework that escapes any inputs run through it. (jinja2, on the server, can do this, though it defaults to not, which I wish weren't true.) I'm not saying that XSS is much better than CSRF, really; I've seen these, too.

The point (and that of the linked article) is more that either you're not subject to XSS, in which case localStorage is strictly better than cookies (it is default-secure), or you're subject to XSS, in which case neither saves you.


The idea that Session Hijacking attacks are irrelevant when a user can use XSS to perform any action on the client is interesting.

Definitely if your service is a valuable target that hackers will spend the time to reverse engineer your client code to create custom tailored XSS attacks then protecting against Session Hijacking does seem to be pointless.

But session hijacking is considered a very common attack (though I can't find any real numbers anywhere, so maybe it's not?), and most services with low attack value will probably be better served by httpOnly cookies and CSRF tokens, which make worthwhile XSS attacks more time-consuming, than by preventing XSS altogether, which is an enormous, continuous effort.

Also, you're implying that CSRF is hard to defend against (otherwise why do you keep running into it?) but in the same breath saying that XSS is simple to defend against.

If people can't defend against CSRF (which is usually just a simple flag for most frameworks), they aren't prepared to defend against XSS, which means getting into a security mindset in all things. A server-side template is not enough - XSS can manifest in headers, in client-side code, in third-party code, in redirections, and it is easy for a developer to mistakenly add a new attack surface.


One small correction: the client (i.e., the web browser or other web client) can see HTTP-only cookies just fine; code running in a conforming browser cannot.

But if I write some code using DRAKMA, urllib or net/http, I can see those cookies just fine.


The very next sentence:

> Although I suppose you could argue that a cookie is just another header value.

So... yes.


Ack - missed that - was visually jumping around some.

But yes, it is.


I grow more and more tired of posts touting engineering sensationalism.

Here is a point of order for developers who work on something realize it isn't idiot proof (them proof) out of the box and then want to write a sensational post:

Implementation and design are a core part of anything you do and considering the risks and accounting for them are part of doing business.

Having worked with large organizations that do active and passive scanning of the web I am constantly shocked how often we are contacting someone about basic SQL injection in their application... in 2017.

JWT is an incredibly powerful standard if implemented effectively, but it's not for the LAZY; it requires thoughtfulness wherever it is active.

JWT solves a serious and real problem that organizations face at scale, which is why you see it implemented in systems like Google sign-in. Realistically it's not going anywhere.

People love criticizing the movement towards stateless tokens on the web, which I find pretty funny... crawl down the stack from their webheads and you usually come face to face with Kerberos managing auth within their networks...


The article very clearly is about the standard, not a particular JWT library.

Server-side session tokens stored in the database worked fine ten years ago, and they work fine today. No need to muck with the load balancer.

Stateless tokens are great too, and use two-factor auth when you need that extra layer of security. No need for newfangled standards; HTTP Basic remains a simple and effective way to convey that token.


Except there are now many instances where no single database server can keep up with request load. It's not fine in all cases today. Where I work now a single request from the user goes into a pipeline of requests (some can be parallel, others not)... our SLA is X, everything that adds up to the total request time counts. Adding even 2-3ms for each service layer to verify session keys is too much.

This is as opposed to < 0.1ms for verifying a JWT. JWT is a structure for stateless tokens... once you have a token, what does 2FA add? Nothing. Also, some algorithms are insecure, so don't use them, or blacklist them... or better, whitelist the algorithm you do use.


I've got nothing against stateless tokens. What I'm saying is that it can be a much easier and more effective pattern to add a second layer of security than to add complexity to the first layer (the token). I believe this is like the idea of defense in depth. For example, making signed-in users re-enter their password before performing certain actions may be preferable to introducing cryptography into all sign-in actions.


This isn't just for sign-in actions... it's for all API requests, and in some designs passthrough requests on behalf of a user to another server/service. It isn't just used for UI requests; it can also be used for server-to-server/service requests... across data centers. You can do signed tokens/authentication without introducing many potential points of failure.


If you're on Java and using an ORM like Hibernate, then that user will be found in the second-level cache. This will eliminate the need for a database roundtrip for all requests after the first authentication. From that point on that particular user will be retrieved from memory.


Which will require session pinning for the load balancer, not to mention, I'm not using Java or a similar ORM. That will only help for a single instance of an application on a single server... not much help when you specifically don't want session pinning.


I agree that not everyone is on Java and using an ORM. But is it only useful for a single server? If you have multiple servers then you would also have a distributed second level cache which would eliminate the need for session pinning.


Distributed, or duplicated... each server potentially making that DB request... depending on load that adds at least 2-3ms, potentially more. If a given request to a single endpoint needs to touch a dozen more, not including resource lookups, and not everything is parallel... or spans datacenters, from colocated servers to AWS, etc., it all adds up.

Very short-lived JWTs mitigate this, as the window for replay is reduced; over HTTPS, by the time you can crack it, that window is effectively gone. The server can verify a signature on a JWT in a fraction of a second... far faster than a DB call... not including replication issues.


For number 2, you could expire them by encoding some identifier based on a hash or key tied to the user object. Change that object and have the server reject the token if that metadata no longer validates.


Or have really short lived tokens, requiring regular refresh, and don't worry about expiring them... you can then delete the refresh token so it can't be found requiring full re-auth if necessary.

OAuth2 + JWT is fine... just whitelist the algorithms you allow and use HTTPS for all communications, even internal.


I feel like the argument against #2 is usually purely hypothetical. I really do not have a problem maintaining a small lookup cache for revocations. The argument against doing this tends to take the form of "all server-side state is bad", when in reality it's sticky state and huge object graphs (read: memory consumption) stuffed into session objects that are the real evil.

A server with 1GB can hold a lot of JWTs in memory. Probably more than most of the people building services here have to actually deal with.


Sure, revocation lists are relatively small. But they need to be available to every server (replication), be proof against server/service restarts (durable), and checked with every request (highly performant). So, a good revocation list effectively requires a database. Not a trivial thing to implement yourself, and a weighty requirement for an otherwise stateless service.


Revocation entries are even smaller than the JWTs themselves, since you can revoke by hash (although you should really just revoke by user ID in most cases).

Your tokens should generally have a rather short lifetime - then you can keep the entire relevant window of revocations in memory.

The implementation is not trivial though, that's for sure.


Hmmm, using a database (eg PG) for the authoritative information, with memcached in front sounds like it would be practical for most uses.


At which point you should probably ask yourself: "What value is keeping all of my state inside this token providing me?"


Probably not. If the Pg instance is replicated, as indicated above, it'll be challenging to keep the Memcached copy in sync. In other words, you can't just use the caching feature of your ORM, you'll need another piece.


Thanks, that does need further thinking about. :)


Postgres is not a good solution for this kind of data. I'd use Redis, but maybe there are even better products.


Could you just have per-server tokens? Wouldn't a single client tend to hit just one server anyway?


> Wouldn't a single client tend to hit just one server anyway?

No? Maybe? It depends on your load balancer. Assigning a client to a specific server is "sticky sessions". Many of us don't want to tie a client to a specific server and prefer a completely stateless 12-factor-style mechanism where any server can serve the client and stateless tokens provide a mechanism to achieve this.


Not to mention the challenges of multi-region replication... doing this for every request along a server-to-server pipeline adds more latency still, since each request to the db means potentially 2-3ms on top of more complex requests, which all adds up.


> and stateless tokens provide a mechanism to achieve this

without revocation. What's wrong with tying a client to a server, or co-located server? Either they are close enough to share tokens / sync fast, or not?


> What's wrong with tying a client to a server, or co-located server?

Nothing, if you can get away with it. What do you do if your server dies or is overloaded? The 12-factor patterns came to be for services running on ephemeral hosts in cloud environments. Stateless servers mean you can seamlessly serve requests from another server without problem. Sure, you can store the sessions in a shared resource (redis perhaps?) but this complicates failover and redundancy and may add latency.

Maybe this isn't an issue, maybe it is. If you don't need or want that, then just use normal sessions, for sure.

Revocation can be handled (although admittedly not as well as with sessions or stored tokens) through short TTLs and refresh tokens (which are stored, but only need to be looked up when the stateless token expires). It's not perfect, but it's often a good enough tradeoff.


What if you are running dozens of services each specializing in its own domain? Do you proxy each service through a pool of central webservers? Or do you just stand up a central auth server and have each service trust that auth server?


The latter makes sense to me. Auth is a cross-cutting concern.


If you go so far as to maintain a revocation store that is checked on each request, you might as well just use that same store for full-blown server-side sessions.

By your measure, 1GB can store a lot of tokens in memory: 32B tokens + that again as metadata (e.g. user ID + TTL in Redis) = 15,625,000 tokens.


No, the session state itself may be orders of magnitude larger than an id. At 4 bytes per token you can store up to 250M ids per 1 GB. But session state might store kilobytes of data per user for roles, permissions, names, descriptions, links etc.

And you're overestimating the necessary size of a revocation store. Only a tiny fraction of your users ever log out or otherwise invalidate session via means other than TTL. You're looking at storing just a couple thousand 4-8 byte revoked session ids and ttls instead of gigabytes of session data.

If you have a security breach and need to invalidate all tokens, just reject all tokens with an issue date before the time it was fixed. And they all fall off anyway after a week (or however long the TTL is).
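That global cut-off is a one-line check against the iat (issued-at) claim; a sketch, with a made-up cut-off constant, assuming the token's signature has already been verified:

```javascript
// Reject any token issued before the breach was fixed. NOT_BEFORE is a
// hypothetical deploy-time constant (Unix seconds); `claims` comes from
// a token whose signature has already been checked.
const NOT_BEFORE = 1489536000; // e.g. the time of the fix

function isAcceptable(claims, now = Math.floor(Date.now() / 1000)) {
  if (typeof claims.iat !== 'number' || claims.iat < NOT_BEFORE) return false;
  return !claims.exp || claims.exp > now; // normal TTL check
}
```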


Or better, have a really short lifetime for server-to-server JWTs (I'd suggest even 1 minute, with a new one per request).

For client-server, a 5-minute refresh is fine, as long as you do a lookup on refresh, so you can expire refresh tokens, requiring a full re-auth.


I found this video incredibly informative about how to effectively implement JWTs, along with security advice and a nice refresh-reissue process:

https://youtu.be/mecILj3p4VA?t=2m8s


> (1) Criticizing vulnerabilities in particular JWT libraries, as in this article.

The purpose of this article is to criticize the standard, not particular libraries.


I don't see any valid arguments in the post. The issues raised are either mis-implementation or misuse of JWT. All I am getting is "JWT can be misused in such and such a way that makes your application vulnerable. And neither its standard nor its libraries prevent that, so it sucks".

But when was the last time we saw any technology successfully prevent people from being silly?


> But when was the last time we saw any technology successfully prevent people from being silly?

You can never stop someone sufficiently motivated to shoot himself in the foot from doing it. But you can make it harder for those who would do it by accident by providing more safety features; in the case of security this is usually seen as a good idea (safe defaults etc.)


But this is the biggest thing with any security-sensitive code or practice!

Do not give people options, do not allow algorithmic flexibility, do not have fallbacks, do not have backward compatibility, do not allow "testing" or "insecure" options, do not have complex state machine behavior.

All of these things are exactly what JWT and other "design by commission" standards like SSL suffer from, and they have predictably led to ongoing, at times unfixable security problems.


I agree. I've read the whole article and still wonder why I should stop using JWT.


You shouldn't. Simply check that the hash algorithm specified by the client is the one you used when issuing the token. In a side project, I simply hard code the algorithm [1].

[1]: https://github.com/teotwaki/grace-calendar/blob/develop/app/...

Edit: DYAC.


+1. I read the article and ended up with a TLDR where I expected some explanation and facts.

It's a good thing that cookies have never been used in a bad manner. /sarcasm



I use stateful JWTs for session management, storing them in localStorage. If someone can exfiltrate the token, they will get a week long authorization, as well as some identifiable information (username, name and role).

Probably I can achieve the same overall system with cryptographically secure session cookies, that are persisted in a database, or other store that is accessible across multiple servers. I guess it would amount to the same thing.

Originally I implemented it because:

* My systems are SPA's. Totally JS dependent from the word go.

* I felt like there would be some advantages to being able to establish certain claims without verification. Say for display purposes prior to server comms (show a list of multiple available sessions, for example)... In practice this hasn't really been true. Generally I find in the end I am always checking and verifying anyway - without any huge overhead.

* I've always had a sort of fuzz of uncertainty about cookies. They always felt a bit out of my hands. Thinking about it rigorously, of course, people can switch off JS. They can switch off persistence.

* All my users' local data can be persisted in one place, rather than having to store a reference in the cookie and then look it up in localStorage. In reality though the code for this is pretty trivial...

So overall while I don't know how right he is, I feel like maybe he has a point. Why not just use cookies?

Maybe it's just because as a JS dev, I want everything to stay within a JS universe...and for some reason Cookies have always felt outside of that to me.


If I can offer some advice in the other direction, don't use cookies.

I tried to do the right thing, using HTTP-only cookies set over an HTTPS-only endpoint, only to find that it's stupidly complicated and has a lot of annoying edge cases. Turns out iOS's webviews don't like them; iOS in general doesn't like them to be on api.hostname.com if the app is on app.hostname.com; you can't check whether you are logged in without doing a web request (which is annoying as hell if you are trying to keep a "logged in" state in something like a react app); you need to deal with a bunch of stupid flags to get the damn browser to even let them go across domains; and a hell of a lot of other annoyances that I can't remember right now.

We are most likely moving to something like JWTs (stored in localstorage or indexeddb) soon because of these issues.


These "annoyances" are security features. They're there for a reason. Learn how they work and why they exist. Use them. Stop trying to treat them as bugs that you need to work around.


The fact that an iOS UIWebView doesn't allow you to set 3rd party cookies in spite of what the user allows in Safari's settings is a security feature?

The fact that I can't get an app hosted at app.hostname.tld to send a cookie to api.hostname.tld, when the app allows credentials to be sent in the XHR request and the server allows app.hostname.tld to send credentials via the Access-Control-Allow-Credentials header, on all platforms, is a security feature?

The fact that I can't purge an HTTPOnly cookie in javascript without making a call to an endpoint is a security feature?

The fact that cookies default to being JS-readable and working across both HTTP and HTTPS, and that you need to make sure to set flags like "Secure" and "HttpOnly" or you will be open to all kinds of attacks, is a security feature?

The fact that cookies are sent on all requests on that domain and preventing browsers from doing that is what brought me down this path in the first place is a security feature?

Yes, there are security features that cookies give you that are extremely useful, however the downsides, bugs, differing implementations, arbitrary defaults, and the need to know the right set of flags and headers to send to make it secure aren't features. Not to mention that you STILL need to do things like CSRF-Protection to actually secure it.


When you say validate login in a react app, what do you mean? Surely the only way to validate a login is to make a request. Or are you saying that your tokens never expire?


Not really validate, just maintain state. (probably should have worded that better...)

Of course the server is going to validate it every request, but it's nicer being able to fail "sooner" on the client side when we know we aren't signed in, or we have never signed in, or our token expired a day ago and we need to re-login, etc...

With HTTPOnly cookies we need to make a request to find any of that out, and when paired with redux and react it's very annoying to have to make a web request to get a small glimpse into what the state really is and try and maintain that in a JS value somewhere AND avoid flashes of incorrect state.

Hell with HTTPOnly cookies you can't even clear it without a web request!


Well, you can use JWT + cookies... make a JWT request to https://api.somewhere.com/token-signin to get cookies for https://api.somewhere.com

(of course you 'can' but I'm not sure if this is recommended or not.)


Wouldn't you need to assign your cookies to the TLD for them to be accessible to both subdomains?


It would also send them to the CDN if it's hosted on a cdn subdomain.


Definitely, there's a whole host of potential pitfalls from assigning cookies to the TLD. From the OP's post though it sounded like their issue was one subdomain being unable to access cookies of another subdomain.


It's more issues with "3rd party" cookies.

"app.hostname.tld" doesn't need to actually access the cookies at all, it just needs to make requests to "api.hostname.tld" which sets the cookies and then later validates them.

Unfortunately safari blocks this use case unless you have also visited "api.hostname.tld" directly, and there doesn't seem to be any easy way around it (outside of allowing all 3rd party cookies...)

And while iOS safari now handles this (I think they allow *.hostname.tld to use 3rd-party cookies for any other subdomains as long as hostname isn't a common provider or something?), it doesn't seem to work consistently for UIWebView or WKWebView hybrid applications. And the "allow 3rd party cookies" setting doesn't seem to apply to the web views either.


Ahh... yes I'm familiar. I've worked on a couple apps where Apple/Mozilla 3rd Party Cookie polices were a pain point. One option we used was an interstitial page that the user visited briefly hosted on the API layer. Another was switching from cookies to Bearer Tokens which is a whole other bag of worms.


We only wanted them assigned to the subdomain that needed them (api.hostname.com), and we set it up that way to make sure we wouldn't accidentally expose cookies to other domains down the line.


I think you should probably have used another common parent domain, like app.web.hostname.com & api.web.hostname.com, with the cookies set for *.web.hostname.com (or something).


That's weird. Cookies are part of the HTTP standard, no? That means iOS is the one not respecting the RFC.

Kinda like Safari throwing exceptions when you're trying to access localstorage in incognito.


Ideally you need to use httpOnly cookies to store your JWTs too.


If the SPA is doing XHR requests then localStorage is also an option. It has the advantage that the application can control on which requests the token is sent, in contrast with cookies, which are sent with every request on the domain.


Can't you do "*.hostname.com"? asp.net mvc handles it for me, and I have subdomains for each customer; they log in and operate in their subdomain. All the special cookie flags are configurable so it keeps top security. Getting cookies across domains is something you don't want to do, so I have a feeling you are probably doing something wrong.

My advice is "use proper framework".


Another advice: You shouldn't be saving the JWT on localStorage for security reasons, have a look at the info here: https://news.ycombinator.com/item?id=13866965


Indeed the article in question links here as required prior reading: http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-fo...

So my comment was in the context of having read both of these. In the link here he also strongly argues against storage of JWT in localStorage.


> So overall while I don't know how right he is, I feel like maybe he has a point. Why not just use cookies?

Because in a highly distributed system hitting a database to validate authorization is expensive and causes bottlenecks.

A cookie that requires you to hit the database does not solve that issue. Although if the cookie was signed some way that can be cryptographically verified then great. But then you are essentially re-implementing something you could be doing in a standard way instead.


I think the expense of validating authorization to a database can often be worth the cost. Having a dedicated sharded SSD DB system, or other fast cached DB system that is dedicated to checking and validating a cookie/token of a user for each request solves many problems, such as quickly clearing tokens in the case of a hack, and if there is a DB failure on one of these systems then the user simply has to login again and their token/cookie will be stored on another DB in the shard.

The extra overhead on each request of checking these credentials, especially when these requests are hitting the product's database anyway, is often worth the additional security.


Cookies have one advantage over localStorage and custom headers though: they can be set by the server in a way that client-side JS code doesn't get to see or change them.

This makes abusing XSS vulnerabilities to get to the token slightly harder.


"Slightly harder" is right. You can always write a non-JS client app, e.g., using Apache HttpClient. At that point the client can do anything it wants with headers.


> Because in a highly distributed system hitting a database to validate authorization is expensive and causes bottlenecks.

Don't you have to do something similar to invalidate tokens anyway?


> Don't you have to do something similar to invalidate tokens anyway?

Not exactly..

1. Invalidation lists can be held in memory more easily than an entire token database. And if the invalidation list is huge you can distribute a bloom filter across your nodes and use that to check before hitting the database.

2. As another poster pointed out, bearer JWT tokens are meant to be short-lived. If your implementation is on top of OAuth, use a longer-lived refresh token to get a new bearer token every so often (say, half an hour). So if you are OK with your invalidated tokens remaining valid for "up to" the expiry (up to half an hour in this example), you only need to do strong validation on the refresh tokens.


You can use a refresh token to get an 'access token' at https://site.com/access-token, and then use the access token for API access.

Then, having the 'just-the-right-amount-of-short-expiration-time' for access token helps... maybe? :)


I probably got down-voted because I conflated cookies with session-token cookies. You can of course have a cookie that does not require a database lookup to validate.

But JWT takes care of having the expiry signed into the value (in a cookie, the expiry is more of a suggestion that a modified client could ignore). Combine that with low-expiry JWT tokens and high-expiry refresh tokens (subject to more validation) and I think it is a clear winner.


A lot of people are talking about the "none" algorithm issue, but the more recent vulnerability[0] is more telling: the report to the working group mailing list[1] pointed out that the standard has a "security considerations" section in the RFC, and this particular issue was never covered.

And now there are difficulties around the fact they cannot update an RFC which people will refer to for years.

It's not a vulnerability in one or two libraries - it looks like just about every library made the same mistake, which points to something much more broken.

[0] https://auth0.com/blog/critical-vulnerability-in-json-web-en... [1] https://www.ietf.org/mail-archive/web/jose/current/msg05613....


The "none" algorithm set in header is a well known problem and, for example, nodejs most used library automatically uses asymmetric keys when one is given, ignoring the header (https://github.com/auth0/node-jsonwebtoken/blob/master/verif...)

As long as the problem is known to the developers and the key is specified, I think the biggest issue with JWT is the lack of session invalidation (that is, if you log out, your already-issued tokens are still valid until their expiration), but it's a good tradeoff for not having server sessions.


Session invalidation is possible though, by maintaining a (short) blacklist of tokens on the server. JSON Web Tokens can be given an ID (via the jti claim), and server-side these IDs can be matched against this blacklist. When you log out, you send a request to the service that your current token be blacklisted.

Because JSON Web Tokens are short-lived, the blacklist only needs to hold entries for the validity period plus a few seconds, so it remains very small (often empty).

If you use JWT to allow authorization on several servers, then you do need to distribute this blacklist, so it is not a completely trivial solution. In the simplest scenario you might get by with only maintaining a blacklist on the server that can refresh tokens (this means that when the token expires, a new one cannot be automatically acquired).
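The jti-blacklist idea can be sketched like this (names invented; the trick is that entries expire along with the tokens they revoke, which is what keeps the list small):

```javascript
// jti -> exp (Unix seconds) of the revoked token. Entries become useless
// once the token would have expired anyway, so they can be pruned.
const revoked = new Map();

function revoke(claims) {
  revoked.set(claims.jti, claims.exp);
}

function isRevoked(claims, now = Math.floor(Date.now() / 1000)) {
  // Opportunistically drop entries for tokens that have expired anyway.
  for (const [jti, exp] of revoked) {
    if (exp <= now) revoked.delete(jti);
  }
  return revoked.has(claims.jti);
}
```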


Yeah, and that's kinda sad because now you have to check a signature AND query a database!


s/database/in-memory-map/g should be fine - and suddenly it's pretty lightweight (subtracting service restarts and a highly available message bus of course :)


s/in-memory-map/cache-server/g if you happen to have a load balancer without sticky session.


In both cases there is a DB somewhere storing the list. The difference is that with the blacklist the server can keep an in-memory cache because it's so small. Sessions don't need to be invalidated atomically so the blacklist can be refreshed every couple of seconds.


Store it in a DB for persistence, but push it out to application memory. If for some reason you expect your blacklist to be very large (maybe you have a massively popular API?), push a bloom filter of the blacklist instead of the actual list.

Now, you (probably) only absorb the DB hit on blacklisted tokens.


Two solutions:

1. As other posters pointed out, the blacklist is probably pretty small and can live in memory on your app servers. If you have a distributed raft network or something to keep it in sync across nodes, even better.

2. You can avoid checking it against the DB unless the API call is sensitive (example: modifies data).


Yeah, of course you can do these things. I really meant to say, "there now exists server-side state for this" — I'm bothered by how existence of that state defeats the statelessness benefits of signature-based schemes, not the fact that I have to query a remote database.

Oh, and also: "only store a blacklist" does not work if you want to provide the "revoke this app you gave access to a while ago and now it's spamming" functionality like in most social networks.


Well you cache the blacklist and push updates, so it has no real performance cost. Just a tad more dev time


The "none" issue was highlighted by Tim McLean 2 years ago [0] and comes up in any trivial search about JWT. Surprised that anyone who chooses to use JWT is still getting caught by it since, as you say, any half-decent library mitigates this.

For me, the log out / cross device session management issue seems to force a pattern of short expiry with self refreshing tokens. Commonly used devices feel always logged in, whereas uncommonly used devices end up needing a fresh log in each time.

0: https://www.chosenplaintext.ca/2015/03/31/jwt-algorithm-conf...


In terms of invalidation, I think a case-by-case basis is best, as it often is.

For example -

If some critical part of your app depends on a user's account or session being still valid, just do the check on that endpoint call (grab the sub/ID claim from the JWT and hit the DB, or similar).

The rest of the time - viewing stats/feed/whatever, admit that if the user had a valid token issued to them 5 minutes ago, it's probably OK to send them stats without having to check revocation (or whichever benefit of JWT you're exploiting).

Thing is, this at least gives you the /option/.


Exactly. The session invalidation has to happen using a session store or expiry header or something similar. In this regard JWT is not better than cookies.


> expiry header

JWT tokens have the expiration date embedded in the token. There is no way to force one to expire like you can with cookies.

Although force is a strong word. Even with cookies if you tell the client to delete a cookie it doesn't mean it has to listen.


Session invalidation is actually very easy to implement. It's important to think of it as a process instead of a built-in part of the standard.

In most of our implementations we achieve this by differentiating between the session token and a request token. Requests that actually power the app use tokens that are very short lived. Request tokens are generated by the core auth server using the session token. A session can be invalidated at the core auth server which will then refuse to give request tokens to the bearer.
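A minimal sketch of that split, with hypothetical names and an in-memory dict standing in for the core auth server's session store:

```python
import secrets
import time

# Session tokens are opaque, stateful, and revocable at the auth server.
# Request tokens are short-lived and verified without any central lookup.
SESSIONS = {}      # session_token -> user id
REQUEST_TTL = 60   # request tokens live for one minute

def create_session(user):
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = user
    return token

def invalidate_session(token):
    SESSIONS.pop(token, None)

def issue_request_token(session_token):
    """Refuse to mint request tokens once the session is revoked."""
    user = SESSIONS.get(session_token)
    if user is None:
        raise PermissionError("session invalidated")
    return {"sub": user, "exp": time.time() + REQUEST_TTL}
```

Once the session is invalidated, the bearer can keep using any outstanding request token for at most REQUEST_TTL seconds, then gets locked out at the refresh step.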


Session invalidation is an issue with all tokens.


This article is slightly misleading. One should ALWAYS be suspicious of any article that states X is always bad and should be avoided. Google, Facebook, Microsoft and hundreds of other companies are using JWT in security critical software. If you've ever logged into an app with Google, you've used JWT.

All I see here is the author complaining about poorly implemented JWT libraries. It's not a problem with the spec, it's a problem with the implementation. XML-DSIG suffered from a number of similar issues and was arguably less secure than JWT because of the massive attack surface provided by all the specifications layered on top of each other.

Here's how you can use JWT Safely:

1. Standardize on what's allowed in the alg header and validate it, don't rely on the JWT library you're using to do it for you.

2. Make sure you're using a high quality JWT library, jwt.io keeps a list of JWT implementations and highlights gaps.

3. Understand that bugs in code related to token generation and verification can and do lead to compromise of your application and potentially your users' data. Treat such code with great care.


Oh my god! Outrage! Superlatives!

Come on. By all means, criticize flawed implementations containing bugs and security holes, but drop the attention-seeking behavior of screaming loudly about how an entire standard is [insert string of superlatives here related to "worthless" and "broken"]. If you're going to make such incredibly strong claims, your arguments had better be up to snuff.

With good implementations (plenty of which exist), and careful usage (via good coding and design habits), JWT is a fine standard and it can save a solid amount of time when constructing the security portions of a system.

Shouting about how something is 100% flawed and should be cast into the flames may get you plenty of views and outrage cred, but (thankfully) it doesn't say much about the veracity of your analysis.


It would really help encourage the uptake of better alternatives like libsodium if it were standardized. Just referencing some random library can scare decision makers; whereas referencing an IETF official document or ISO standard makes them just take it as given.

The same problem exists with serialization formats - you have XML and JSON, both of which are standardized and have an "official face", although JSON was not born that way. Google protocol buffers are quite superior in many ways, yet as they are just some product of some company and not an actual standard, decision makers are scared of them.

Technology experts do not get to make all the technical decisions, so standardization matters, even if for the stakeholder feelgood factor!


... yes, but... sidenote, JSON is almost unreasonably easy to grok, and translates well into every web-abstracted language, making it the clear-and-away winner.

At the end of the day, tech doesn't win. Developer experience wins. By the time companies have the resources to fight for every iota of performance, they've already won because they shipped product faster than everyone else --- why? Their developers could move and iterate quickly.


"Send a header that specifies the "none" algorithm be used"

Why would an issuer ever let a client decide what algo to use?

"Send a header that specifies the "HS256" algorithm when the application normally signs messages with an RSA public key."

Again, under what circumstances would a header be used by the client to ask for a specific implementation?

What about encrypted client side cookies - would you let the client "send a header" to specify which key to use???

The only problems you highlighted are serious input validation issues and a naive, broken trust model.


> Why would an issuer ever let a client decide what algo to use?

JWT, like SAML, is made to support separate identity providers and service providers. In the spirit of generality, this means the identity provider(s) could be from a different vendor, operated by a different organization. E.g., you could let users access their account on your service based on a token issued by Google. But that means Google chooses the algorithm, not you!

And it's a standard, so you don't have to write any code of your own. Just import the right middleware for your framework and you're set!

So the temptation is there for library authors to support all the defined algorithms, and just enable everything by default to be as compatible as possible - after all, you can just look at the header to see which algorithm to use!


> The only problems you highlighted are serious input validation issues and a naive, broken trust model.

If these things are suggested in the standard and promptly followed by major implementations, then the standard isn't very good.


But they aren't. Nobody is suggesting to anybody to use NONE as the algorithm.


EDIT: the secondary spec describing the algorithms is at least clear on the use of none, I missed that at first: "Implementations that support Unsecured JWSs MUST NOT accept such objects as valid unless the application specifies that it is acceptable for a specific object to not be integrity protected. Implementations MUST NOT accept Unsecured JWSs by default."

https://www.rfc-editor.org/rfc/rfc7518.txt

Still, my point about it missing from RFC 7515 stands.

---- Original comment ---------

The standard says you should support NONE as the algorithm and that you should use the algorithm the client sends you, all the while completely failing to mention the issues with that, both in its Security Considerations section (which mentions even more "obvious" things like "use keys with high entropy") and in the description of the algorithm to decode a token (which initial implementers probably relied upon to get to a "correct" implementation). Sorry, that is a failure of the spec as well in my book.

If you spec something with risks, at least mark the critical parts clearly with "point away from foot".

A better standard IMHO would have suggested the API for the decode functions, making it clear that the algorithm used should be whitelisted by the caller.


I don't think the spec meant to read that you must allow the client to be able to forge tokens by accepting tokens issued by it without an algo or signature.

If you issue tokens with none, then you will have to accept them when clients send them back. This is obviously a very bad idea, but that's all the spec says. If the issuer chooses to be insecure, that is a valid choice.

If you issue tokens with a specific algo, and clients send them back with a different or none header, you know they have been forged.

The spec allows issuers to decide whether to use none, it doesn't say you must trust none tokens if you know you didn't issue them.


And the spec doesn't spell it out, so initial library implementations forgot to include things like "let the user specify which algos to accept". And if common libraries provide simple APIs, users expect those APIs to still provide good security.

A standard promoted as "the standard for secure tokens" should not aim for "you can use the pieces to build a correctly behaving system" or "the spec allows secure implementations"; it should aim for "if you use this and follow some spelled-out basic rules you get fool-proof secure tokens" and make wrong usage as hard as possible.


> Why would an issuer ever let a client decide what algo to use?

Right, so, why is this in the spec?


The spec doesn't govern what applications can and cannot accept, it governs what contents are valid in tokens. 'None' is valid, that means my parser library will accept it, it doesn't mean my application must accept the token as valid.

Example: The fact that my service has an http stack which must parse a cookie header doesn't mean my app must accept its contents as valid. There's a lot of confusion on this thread about which components should/must do what things.


I guess I'm missing something here because it seems like the spec includes an ability that everyone here is saying nobody should ever use. Seems useless, by definition!


I'm really confused by this post, a signed JWT is issued by the identity provider (or API end point) and is then validated again by the API end point when part of an API call, usually as a bearer token in the header. The validation of the signed JWT is done via the API.

The approach I use is to have a 'use once' refresh token (long timeout) and a security token (short time out) and JTIs to hold a list of logged out/invalid (refresh token used twice) security token IDs.


> The approach I use is to have a 'use once' refresh token (long timeout) and a security token (short time out) and JTIs to hold a list of logged out/invalid (refresh token used twice) security token IDs.

Here's what I've never understood about this approach: the browser can send many requests at the same time, over the same (HTTP/2) or different (HTTP/1.1) connections. If, say, six requests hit your backend at the same time, all with the same refresh and security tokens, with four more queued up on the user-agent, and the security token is expired, how do you know:

1. that all ten requests are valid,

2. to revoke the security token once,

3. to generate one new security token,

4. to mark the refresh token as used,

5. to generate one new refresh token?

Is it as simple as granting some leeway on how long the tokens can be used after they expire/are revoked? Do you have some way of serializing requests on the client to prevent this from happening? Or do you assign all ten requests the same "batch" ID and tie them together on the backend somehow? Do you do a preflight request to refresh the security token if it's expired?


Complaining about OAEP when RSA-OAEP is perfectly safe seems needlessly straw-grasping, the other complaints (should) stand perfectly well on their own.

I've used JWT in three languages and the API has always sucked, really badly. I always end up with a verbose heap of gunk - and in some cases, like jwt-go, there is not even a complete example of use in the README + docs. mfw. It should not take multiple steps to sign or verify a signature.


I agree. The OAEP thing was weird.


In fact, JWE and JWS are not as flawed as JWT is, right? ACME uses JWS.


The author of http://blog.intothesymmetry.com/2017/03/critical-vulnerabili... here FWIW. Personally I would not be so drastic. JOSE per se is not too bad (at least the idea is cool). Some crypto choices though have been really arguable...


That post is great work. Thanks again. But I think you're wrong about JWT.

The problem with JWT/JOSE is that it's too complicated for what it does. It's a meta-standard capturing basically all of cryptography which, as you've ably observed (along with Matthew Green), was not written by or with cryptographers. Crypto vulnerabilities usually occur in the joinery of a protocol. JWT was written to maximize the amount of joinery.

Good modern crypto constructions don't do complicated negotiation or algorithm selection. Look at Trevor Perrin's Noise protocol, which is the transport for Signal. Noise is instantiated statically with specific algorithms. If you're talking to a Chapoly Noise implementation, you cannot with a header convince it to switch to AES-GCM, let alone "alg:none". The ability to negotiate different ciphers dynamically is an own-goal. The ability to negotiate to no crypto, or (almost worse) to inferior crypto, is disqualifying.

A good security protocol has good defaults. But JWT doesn't even get non-replayability right; it's implicit, and there's more than one way to do it. Application data is mixed with metadata (any attribute not in the JOSE header is in the same namespace as the application's data). Anything that can possibly go wrong, JWT wants to make sure will go wrong.

It's 2017 and they still managed to drag all of X.509 into the thing, and they indirect through URLs. Some day some serverside library will implement JWK URL indirection, and we'll have managed to reconstitute an old inexplicably bad XML attack.

For that matter, something crypto people understand that I don't think the JWT people do: public key crypto isn't better than symmetric key crypto. It's certainly not a good default: if you don't absolutely need public key constructions, you shouldn't use them. They're multiplicatively more complex and dangerous than symmetric key constructions. But just in this thread someone pointed out a library --- auth0's --- that apparently defaults to public key JWT. That's because JWT practically begs you to find an excuse to use public key crypto.

These words occur in a JWT tutorial (I think, but am not sure, it's auth0's):

"For this reason encrypted JWTs are sometimes nested: an encrypted JWT serves as the container for a signed JWT. This way you get the benefits of both."

There are implementations that default to compressed.

There's a reason crypto people table flip instead of writing detailed critiques of this protocol. It's a bad protocol. You look at this and think, for what? To avoid the effort of encrypting a JSON blob with libsodium and base64ing the output? Burn it with fire.


I did wrap JWT/JWE with a library lest somebody else in my company would get the bad idea to implement directly, and yes, the first thing I've done was to limit the signing algorithm to a single specified algorithm (HS256 for everything that doesn't have to cross service boundaries).

But I admit never thought of it as a particularly bad crypto protocol. It's actually pretty good compared to most negotiable crypto standards out there. I mean, yeah, JWK allows you to embed X.509 (but you don't have to do it), the value of supporting RSA is questionable, their selection of curves is regrettable (but all the world went with the NIST curves), compression on JWE (and not JWS?!) makes me suspicious, and yeah, JSON is kinda verbose compared to MsgPack et al.

But then I remember that before JWT came SAML+XML-DSIG (and the whole WS-* ecosystem) and CMS/PKCS#7 (and the whole ASN.1 ecosystem). Ah, compared to these, JOSE is a breath of fresh air.

What a lightweight and uncomplicated serialization format - It couldn't even be used for amplified DDOS attacks!

What a modern set of ciphers (besides 'none', sorry, missed that on there ;). No more DES, RC4, or unpopular untested ciphers just there to fill the slots.

Yeah, JOSE still has some parts which are over-engineered but for a STANDARD it's the best one there. Management demands standards, it's a fact of life. And I'd choose JWT over SAML or CMS any day.


"Better than XML-DSIG" is not a reasonable bar. Your description of the protocol is as damning as mine is.


Agreed, but is there a better standard?

I'm also fine with just stuffing MsgPack or Cap'n Proto data inside a libsodium secretbox or usually just crypto_auth to be honest, but corporates love standard. Perhaps libsodium algorithms should be turned into RFCs and then we could have a much simplified token metadata format (expiry, issued date and their ilk) that can be separated from a completely freeform payload. I'd gladly push that format over JOSE.

We can cry "just run an arbitrary format through HMAC-SHA256/libsodium" until the end of days, but it's the same as asking developers to just send a list of key-value pairs during the XML overengineering heyday. Until JSON hit them over the head with the RFC hammer, developers went with XML by default.


Sure: use Fernet.

https://github.com/fernet/spec/blob/master/Spec.md

It's an informal standard, like Noise, or WireGuard, or Curve25519, or Nacl. It's also so simple that JWT nerds will likely believe it's missing something. It is: the JWT/JOSE vulnerabilities.
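For illustration, typical usage via the third-party `cryptography` package's fernet module (assuming that dependency is acceptable in your stack):

```python
# Fernet: versioned, timestamped, encrypt-then-MAC tokens
# (AES-128-CBC + HMAC-SHA256) with exactly one algorithm choice.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()             # one symmetric key, nothing to negotiate
f = Fernet(key)

token = f.encrypt(b'{"uid": 42}')       # opaque, URL-safe token
plaintext = f.decrypt(token, ttl=3600)  # rejects tokens older than ttl seconds
```

There's no header telling the verifier which algorithm to run, so the whole "alg confusion" class of bugs simply can't happen; a bad key or an expired token just raises InvalidToken.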

It used to be that we got things working and then standardized them. Now we build cryptosystems de novo in standards committees and spend the next 10 years writing papers about the resulting flaws. Ok, it didn't used to be that way, and we've always been writing papers about flaws in crypto standards. I don't know what to say about this, except "stop, somehow".


Yes, informal standards, but that's exactly the problem. At my previous work, I've implemented something similar to Fernet in the past (though using AES-GCM rather than AES-CBC+HMAC), and that's dead simple. But it's not standard.

Every time I've suggested modifying our JWT implementation to use Ed25519, or using any NaCL implementation for encryption instead of the vulnerability-footgun framework better known as JCE, I get raised eyebrows.

People want standards. Fernet is nice, but it should be pushed to RFC level and offer more metadata besides a timestamp (not hard, just copy all the JWT claim names and stick a JSON payload into the ciphertext :))

It's also not useful when you do need asymmetric encryption/signature, and you can't just ignore these use cases, since people will keep JWT alive just for them.


As shitty as XML-DSIG is, it's pervasively used across almost every single language and platform imaginable. The goal of standards like JWT and XML-DSIG is interoperability above security. What good is perfect security if you can only talk to yourself?

PS: libsodium is great but the fact that it requires C bindings to use from a JVM app makes it a non starter for a lot of use cases.


I don't know how to respond to this in any other way than to say that it's unethical to build systems you know have security weaknesses.


1 - You have not demonstrated in any way that it's impossible to use JWT tokens in a secure manner. Just that it's easy to shoot yourself in the foot.

2 - It's not unethical to make tradeoffs, period. We all build systems that have potential attack vectors and we make tradeoffs based on threat models. That's the difference between Academics and Engineers.

Example: Hacker News allows shitty passwords, that's a security weakness. However, the data that's protected by that shitty password is pretty meaningless. Is that not a good security tradeoff? Is Hacker News unethical?


What should I use instead? I pass around JWTs attached to HTTP requests that represent an authenticated user, and contain things such as a user's email, groups, scopes etc. I've tried to keep it simple (RSA, SHA256, nothing interesting), and use the subset of JWT that seems sane (basically the bits I see Google using in their JWT based OAuth flow)

I used JWTs because

1. I like the statelessness of JWTs (though I've learnt that there are many trade offs related to this)

2. OAuth uses JWTs, Google uses OAuth, and Google usually know what they're doing

3. I can attach custom claims

4. I don't know of any alternatives, other than x509, which I have less confidence on me being able to validate correctly than JWTs.

What would you suggest? An opaque token which I then look up against a central database/api?


SPKI (RFCs 2692 & 2693) offers a well-developed, well-thought-out framework which meets all your needs: SPKI certificates can contain state, and thus support server statelessness; SPKI certificates can be used as OAuth tokens; SPKI certificates support custom claims (and in fact go so far as to define a well-formed claim calculus which can be implemented easily, and which supports just about anything one would wish to do); and SPKI certificates are far, far simpler than X.509.

Take a look: https://tools.ietf.org/html/rfc2692 & https://tools.ietf.org/html/rfc2693


You're in luck! As I pointed out in the comment you replied to, JWT includes X.509.


I agree on all the accounts on what you said. I am probably biased by the fact I like JSON over XML. Probably JOSE just took the wrong path and could have been designed way better than it is...


JWT begs you to use public key crypto because it makes sense for a lot of the use cases that people implement using JWT, specifically having a single token issuer while having distributed token validation.

Using public key algorithms also makes it easier to implement a sane key rollover strategy. I suspect this is the reason that Auth0 pushes their customers to validate tokens with public keys published on their JWKS endpoints.

As for X.509, I agree it kind of sucks but what are the alternatives?


The alternative is pushing plain public keys over an authenticated channel. You usually don't need the complexity of X.509.

That being said, the aforementioned authenticated channel will more often than not be TLS, which does happen to rely on X.509.


> The ability to negotiate different ciphers dynamically is an own-goal.

And JWT doesn't prescribe a negotiation.


You are making the same mistake the IndexedDB haters made. The standardization effort around JOSE, as far as I can tell, is about making the browser a place where you can run crypto. They want it to be composable because that's the web way.

I can agree with your critiques but still wish "real" cryptographers would accept the inevitability of a worse-is-better approach winning here. Don't flip tables, write the jQuery of web crypto. You'll do more good in the long run going with the flow on this one.


> The standardization effort around JOSE, as far as I can tell, is about making the browser a place where you can run crypto.

The problem is that the browser is not a place where one can safely run crypto.

> Don't flip tables, write the jQuery of web crypto. You'll do more good in the long run going with the flow on this one.

That's a bit like advising a vegan to invent a better method for slaughtering cattle (n.b.: I am not a vegan and have no problem killing & eating animals). The problem is that no-one who understands security thinks that in-browser crypto is a good or safe idea, and thus no-one who understands security wishes to help it along. It should be stopped, not made slightly less bad.


For my current use-case, one of the appealing things about JWT is there are libraries for just about every language, which makes it easy for 3rd party developers to integrate with my service.

Are there any better alternatives to JWT that have implementations in many languages?

If not, elsewhere in this thread tptacek and others have suggested essentially `base64_encode(crypto_auth(json_encode(object)))` would be sufficient... is there any reason not to just slap a name on that "standard" and publish a bunch of libraries?
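For concreteness, a stdlib-only sketch of that pattern with an expiry claim bolted on (the names mint/check are hypothetical; a real design would also want key IDs and rotation):

```python
import base64
import hashlib
import hmac
import json
import time

def mint(claims, key, ttl=3600):
    """The 'slap a name on it' token: JSON -> MAC -> base64.
    An exp claim is added, since crypto_auth alone gives you no expiry."""
    payload = json.dumps({**claims, "exp": time.time() + ttl}).encode()
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(tag + payload).decode()

def check(token, key):
    raw = base64.urlsafe_b64decode(token)
    tag, payload = raw[:32], raw[32:]
    if not hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).digest()):
        raise ValueError("bad MAC")
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        raise ValueError("expired")
    return claims
```

Note there's no algorithm field anywhere: the verifier runs exactly one MAC, which is the point.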


> is there any reason not to just slap a name on that "standard" and publish a bunch of libraries?

It was called JWT.

  jwt.decode(token, 'secret', algorithms=['HS256'])
That's the Python for JWT; encode is similar. You can try something like what you suggest, but the devil is in "object": want your token to expire after a finite amount of time? You'll need to encode that yourself. Token need to be valid only for certain cases? You'll need to encode that yourself. Essentially, you end up reinventing the part of JWT that is relevant to your use-case, and hopefully, arrive at a decent API.

At least in Python, python-jose's APIs will check not only the signature, but these additional claims (expiration of the token, that the token is applicable to the use we're verifying it for).

(I still think we should be moving towards a common API (and JWT is good enough) and building libraries around that standard. They're going to fall short in some ways at first, and I wish people would help improve them. The suggestion of using base64/libsodium feel dangerously close to "rolling your own", because its too low-level for the purpose at hand.)


I was confused about libsodium/NaCl APIs, specifically crypto_sign vs crypto_auth. The difference:

1. `crypto_auth` is for secret-key signatures (auth): https://download.libsodium.org/doc/secret-key_cryptography/s...

2. `crypto_sign` is for public-key signatures: https://download.libsodium.org/doc/public-key_cryptography/p...

And tptacek is arguing secret (symmetric) key is preferable: https://news.ycombinator.com/item?id=13866983


Symmetric signature is simpler, leaner (in message size overhead), faster and more secure (by virtue of it being simpler).

But there are still cases where you would choose asymmetric signatures over symmetric signature, due to the very essence of it being asymmetric.

The rule of thumb is that when you want to produce a cryptographic token that will be consumed by parties which you don't trust, you should use an asymmetric signature. Realistically speaking, the untrusted party could (and very often should) be almost any other service inside your own company. If you let symmetric keys spread around, you should treat them as good as if they've been leaked.

There is an alternative if you're able (and willing) to manage shared secrets through a safe out-of-band channel (e.g. deriving them from client secrets).


So JWT is bad because there are bad implementations and there are dumb people who shoot their feet^W^W^Wdon't force alg. Seems like doing software development for 13 years leads to serious problems with logic. There is also confusion between sessions and session storage. Meh..


This post says "it's insecure if you do it wrong".

Well.....


I hear where you're coming from... but this is also the bane of developer existence. We all have to accept that, every year, tens of thousands of new developers looking for jobs enter the market. There's such a demand for developers that these people get jobs. So footguns, as much as we like to play high-and-mighty and say, "well, duh, don't shoot yourself in the foot" are a real, existential risk to a lot of companies.


I was under the impression that newly minted developers would use existing libraries and frameworks, which have already taken security into account.


The article points out that many popular libraries have vulnerabilities and unsafe defaults.


Which to me says that relying on there not being any footguns is wishful thinking. The better recourse, to my mind, is to stress the need for mentorship, so people learn to proactively look out for traps.


A standard that best case doesn't explain the risks properly (so many implementers get it wrong) and worst case prescribes dangerous behavior isn't a very good standard. Especially in a field where many developers are told over and over again to rely on standards, it really should spell out even tiny issues.


One advantage I think not mentioned by some of the linked articles is that the JWT's claims are readable on the client.

It's a pretty good plus, for me: no additional round-trips to the server to grab key user details, which can be put into claims, or check access levels (via roles, permissions, or other types of claim).

This doesn't discount the disadvantages, of course.. I think as with everything it's a case of the right tool for the job. "Depends on the use case".


Out of interest, do you check the authenticity and integrity of the JWT on the client side?


We provide an endpoint to check validity with the server, but haven't used it too often. Anything "reasonably sensitive" (or more) doesn't depend on anything like this client-side security.

But, if you're just hiding an additional Delete button on a page based on claims, this comes in handy.

(Edit: in one case, we've used asymmetric keys, i.e. public key so everyone can check integrity. This was a very different use-case to most web apps, though. Overall I'd say if you're carefully checking integrity of something in client-side JS to do something, I think that's probably the wrong approach)


That's exactly what is useful for. Of course access to a resource is determined server-side; JWT simply allows you to adjust the UI to the permissions the user has without any additional calls. If the user changes the JWT he has client-side, he will just get a broken delete button (the server will reject a JWT that has been tampered with).


You've already been to the server once; if your terri-bad app design requires you to go twice, that's more your problem than a "feature" of a broken session system.


Fine, then - to rephrase, it conveniently combines claims with an assertion that the user has been authenticated and authorised. You can do it in other ways too, but it's convenient and a designed part of the make-up of JWT.


Is using none a bit like using http instead of https? The standards support it, but it's 2017 so we shouldn't do it.

What I like about JWT's concept is it's completely distributed authorisation: there's no call to a central identity provider. Thus a SPA can pull initial security info from its server, and then fire out requests to different APIs. As long as the API endpoints have the SPA server's public key, they can verify everything without calling it or another central server.

Having said that, I'm not able to discern whether it's secure enough to be workable, so I only know to mandate a list of good algorithms on the API endpoints and to use SSL :) I'll have to read about this session stuff.

EDIT: I wonder if in bigger projects, a message bus or in-memory cache could signal a token blacklist once the user logs/times out of the original server? Or as some others have said, just have short expiry times and ping the SPA server for new tokens every couple of minutes.


I use JWT in a couple of projects and it never once occurred to me to let the client decide the algorithm. I am not sure what use-case would necessitate something like that.


Guess that shouldn't be in the spec then.


Can anyone think of some criticism for simply storing an API key and secret in localstorage for a web client? It's 50% bridging the gap between using cookies and a normal API and 50% simplifying frontends. The scheme I'm currently using on a project goes like this:

1. Web client ("offline-first" SPA app) hits HTTPS backend in with username and password

2. Web client receives a generated API key and secret, which expires in a week/month/whatever.

3. That API key and secret gets stored in localstorage on the client-side by the web app for future use (as long as they're logged in)

4. Web client includes the API key and secret in a header on requests to the HTTPS backend of the app.

Of course, there are more specifics that could be added like device fingerprinting, invalidating old web-created tokens when a new one is created, and tying api keys/secrets to certain devices, but I think those things are ancillary.

This is obviously very very close to what a cookie would be, and the only way I could see it going catastrophically wrong is the browser being compromised (whether the vector is XSS, or some other leaky surface on the user's computer). Regular cookies and JWT have the same issues.

I can't think of a failure mode that's any worse than HTTPS cookies or JWT, and it is dead simple. I've really been trying to find some flaws in that plan lately but I can't.
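A server-side sketch of steps 2 and 4 (hypothetical names, in-memory dict standing in for the real DB):

```python
import hmac
import secrets
import time

# Hypothetical server-side store: api_key -> (secret, expiry timestamp).
KEYS = {}

def issue_credentials(ttl=7 * 24 * 3600):
    """Step 2: mint an API key/secret pair with a fixed lifetime."""
    api_key = secrets.token_urlsafe(16)
    secret = secrets.token_urlsafe(32)
    KEYS[api_key] = (secret, time.time() + ttl)
    return api_key, secret

def verify_request(api_key, presented_secret):
    """Step 4: authenticate a request carrying the key/secret headers."""
    record = KEYS.get(api_key)
    if record is None:
        return False
    secret, expiry = record
    if time.time() > expiry:
        del KEYS[api_key]   # expired credentials are purged on first use
        return False
    return hmac.compare_digest(secret, presented_secret)
```

A production version would store only a hash of the secret, but the shape of the scheme is the same.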


Perhaps I am missing something but I really don't see the point of storing it in localStorage or sessionStorage over cookies whatsoever.

1. You have to write code to provide the authentication values in all requests.

2. The GET request for the initial page render can't possibly be authenticated.

Why not cookies? Other than that I agree. I really don't see the point of following the JWT spec or using an implementation of it when it has been shown that these implementations are poor (problems with none algorithm & asymmetric keys).

Fundamentally, what we are talking about is simply a claimed identity, verified and signed by your backend. This is a sound principle. Just implement that and your attack surface is considerably smaller.


The reason I wanted to try this scheme was to finally remove the little difference in authentication method between web frontend and commandline/mobile API client...

OWASP says not to do it (https://www.owasp.org/index.php/HTML5_Security_Cheat_Sheet#L...) but the stated reasons are kind of vague/I'm not sure the reasoning is sound.

If someone has physical access to the machine, all bets are off, and if a XSS vuln happens, pretty sure people can get whatever information they're looking for (including the cookie) anyway. The only real objection was the inability to restrict to HTTPS (path based, everything is just based on same origin regardless of scheme I think), however same origin policy still applies like normal.

I wasn't trying to recommend localstorage over cookies, more like just trying to see if there's any huge blindspot I was missing, everywhere I look says not to do this, but the reasons were never very satisfying.


The biggest flaw for me is that any 3rd party script can access your localstorage without your knowledge, while it can't access HTTPonly cookies.


Thanks, that's a valid point -- I previously misunderstood the HTTPonly cookie setting to mean restricting the cookie to HTTPS (that's actually the `Secure` flag); just now read about HTTPonly (https://www.owasp.org/index.php/HttpOnly).

To be honest though, once there's an unauthorized 3rd party script running on your page, that's a pretty dire situation already. I guess it's also possible to protect a little from malicious web addons/extensions.


It requires looking up the secret on the backend. The JWT doesn't require that.


I think that JSON Web Tokens (like most things involving JSON) are ill-thought-out, and they can definitely be a bit of a foot-gun, but they are also useful.

I do take issue with the idea that they're not good for stateless authentication: I think they're great when used as short-lived authentication tokens (which don't require serve state) with accompanying long-lived refresh tokens (which do require server state). E.g. a system in which auth tokens are good for an hour and refresh tokens are good for longer (and a refresh token can be refreshed) offer a pleasing user experience (in the normal case, one need never log back on) while also preserving security (revocation takes at most an hour to come into effect). The business gets to make the economic decision about the tradeoffs between risk and cost, deciding whether auth tokens should last for a day, an hour, a minute or a second. I don't think this is 'congratulations, you've reinvented stateful sessions'; rather, it's a well-designed system.
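The short-lived/long-lived split is easy to express in the claims themselves. A toy sketch (TTL values are just examples of the business knob described above):

```python
import time

AUTH_TTL = 3600            # auth tokens: stateless, good for an hour
REFRESH_TTL = 30 * 86400   # refresh tokens: server-side state, 30 days


def make_claims(user_id, now=None):
    """Claims for a short-lived, stateless auth token.

    Shrinking AUTH_TTL tightens the revocation window at the cost of
    more refresh traffic -- that's the risk/cost tradeoff.
    """
    now = time.time() if now is None else now
    return {"sub": user_id, "iat": int(now), "exp": int(now) + AUTH_TTL}


def is_expired(claims, now=None):
    now = time.time() if now is None else now
    return now >= claims["exp"]
```

Revocation then only has to touch the (stateful) refresh tokens; the stateless auth tokens simply age out.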

I do wish that JWTs had been better designed, and I wish that folks didn't have to be so careful using the libraries which support them.


If you are interested in a Ruby library for signing and verifying a token containing a simple payload, take a look at our Slosilo library:

https://github.com/conjurinc/slosilo#signing

We use this library to store a signed username. Nothing more than that, just a username in a signed, expiring token. No `alg` field or any other options.

The Slosilo code has been subjected to a professional cryptographic audit, so it's safe for you to use. Unfortunately, without an NDA, we can't show you the audit report, that's just how these things work. The only audit finding was a recommendation that we switch to `AES-256-GCM`, which we did in Slosilo 2.0, November 2014.


>> A lot of developers try to use JWT to avoid server-side storage for sessions.

This is based on what? Sounds like he just made it up. His other claims don't sound much more solid either. I would like to see a more in-depth analysis on the subject, this all looks very hand-wavy to me.


> This is based on what?

Based on the entire reason JWT is even a thing? Developers love to believe every app they build is going to run at the scale of Facebook to the power of Google times Twitter, and thus needs to run on 10,000 Docker instances spread across 15 data centres around the globe (and soon, one on the moon!).

Relying on server-side sessions is "terrible" because you have to talk to the backend, and you need to keep the data synchronised in a manner that all 10,000 Docker instances can read/write to it instantly. So instead, a new concept was devised, whereby you use these stateless tokens that don't rely on the same server after issuing.

Of course, it's impossible to invalidate them individually, and they're either insecure (available to JS) or stored in a cookie, and thus sent with every request, which means, due to their larger size than regular session cookies, more data on each request.

So.. that. That is what it's based on.

Edit: added missed word "same".


I wouldn't say it's impossible to invalidate them individually. It's certainly more effort, and it's probably better to have short-lived session tokens and refreshing, but I think it can be done.

E.g. what about a message bus that publishes an invalid token message that is subscribed to by the API-providing systems, so they can maintain a prematurely-expired tokens list?
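Something like this toy in-process version of the idea -- in production the `on_invalidate_message` calls would come from a message bus subscription (Redis pub/sub, Kafka, whatever), but the local data structure is the same, and entries can be dropped once the token would have expired naturally anyway:

```python
import time


class RevocationList:
    """Premature-expiry list for each API-providing system.

    Fed by invalidation messages; an entry only needs to live until the
    token's own `exp` passes, so the list stays small.
    """

    def __init__(self):
        self._revoked = {}  # jti -> the token's exp timestamp

    def on_invalidate_message(self, jti, exp):
        # called by the message bus subscriber
        self._revoked[jti] = exp

    def is_revoked(self, jti, now=None):
        now = time.time() if now is None else now
        # prune entries whose natural expiry has already passed
        self._revoked = {j: e for j, e in self._revoked.items() if e > now}
        return jti in self._revoked
```

Each service checks `is_revoked(claims["jti"])` after signature verification; the bus only has to carry logout/compromise events, not every request.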

On the keeping info in Javascript vs keeping it in a cookie issue, I don't understand that so well. If you made the token a private member of an object that was responsible for the calls, would that help? Then no code could access it?


> I wouldn't say it's impossible to invalidate them individually

It's impossible to invalidate individual "stateless" JWT's.

If you have a server-side "blacklist", guess what: you're not stateless any more, because you still need to keep data in sync, and now you're tempted to allow some otherwise unacceptable delay for sync, giving a potential attacker more time with a stolen JWT. Plus, you know, defeating the whole purpose of using JWTs (being stateless).

> On the keeping info in Javascript vs keeping it in a cookie issue, I don't understand that so well.

Cookies can be set HTTP only. They're sent to the browser, and it will send them back when making requests as per usual, but they're not exposed to JavaScript, at all. There is 0 way for malicious (or non malicious) client side code to see these cookies, thus 0 way for malicious javascript to steal one used as a session cookie.
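Concretely, that's just a flag on the Set-Cookie header. With Python's stdlib, for example (pairing it with `Secure`, which you'd want anyway):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-session-id"
cookie["session"]["httponly"] = True  # invisible to document.cookie / any JS
cookie["session"]["secure"] = True    # only ever sent over HTTPS

# the header a framework would emit on login:
print(cookie.output())
```

The browser stores it and sends it back on every request, but no script running in the page, yours or an attacker's, can read it.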

> If you made the token a private member of an object

If the data can be read from the network by your JavaScript, it can be read from the network by their JavaScript.


(Feel free to bail on this at any time if my questions/suggestions become tiresome :))

> If you have a server-side "blacklist", guess what: you're not stateless any more, because you still need to keep data in sync, and now you're tempted to allow some otherwise unacceptable delay for sync, giving a potential attacker more time with a stolen JWT. Plus, you know, defeating the whole purpose of using JWTs (being stateless).

While I agree that it's no longer stateless, JWT's still really useful in terms of not needing a centralised auth/auth provider that everyone has to hit to see if I am who I say I am and whether I'm allowed to call an API. And a message bus is a pretty good compromise between the extremes of big wide systems that share state and microservices that don't talk to anything else.

> Cookies can be set HTTP only ... session cookie.

Thanks - I get what you're saying.

> If the data can be read from the network by your JavaScript, it can be read from the network by their JavaScript.

Yeah. I think I see what you mean. Assuming you don't mean literally "reading data from the network", as I assume the problem isn't the network access but the access to the security info, are you saying that hostile Javascript on the page can read everything and call everything that legitimate Javascript can?

If so, I can't tell the difference between that and - say - a CSRF token, which presumably can also be read by "their" Javascript? How does anything work if you have that mentality?


> not needing a centralised auth/auth provider

The vast majority of people don't need the scale that is difficult to achieve with regular server-side sessions, and that JWT claims to "solve". They add complexity to solve a problem most people don't have.

> I can't tell the difference between that and - say - a CSRF token, which presumably can also be read by "their" Javascript

CSRF is about e.g. making a user's browser make a form submission that results in a request which is malicious in some way. CSRF Tokens are embedded in each legitimate form to ensure that the submission received came from a form you control.

If the attacker has JavaScript access to your page, CSRF is not your problem, so CSRF tokens can't help you.


> The vast majority of people don't need the scale that is difficult to achieve with regular server-side sessions, and that JWT claims to "solve". They add complexity to solve a problem most people don't have.

Not really talking about sessions, but I think I see what you're saying.

> If the attacker has JavaScript access to your page, CSRF is not your problem, so CSRF tokens can't help you.

I agree. I'm trying to understand what you were saying about whatever your Javascript has access to, their Javascript does as well. Why should this be a criticism of JWT and not CSRF?


> Not really talking about sessions

Well, mostly a JWT lets you know the user that is signed in.

A server-side session generally does the same thing, but can be used to store larger amounts of data.

> I'm trying to understand what you were saying about whatever your Javascript has access to, their Javascript does as well. Why should this be a criticism of JWT and not CSRF?

They're unrelated attack vectors.

CSRF is about bad actors producing links and/or forms on a different site to your own, that a legitimate user clicks/submits (either through social engineering or some kind of javascript in their page) causing them to make a request to your server. A CSRF token prevents this because it ensures that form submission requests have come from a form hosted on your server.
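i.e. something like: mint an unguessable per-session value, embed it in every form you render, and reject any submission that doesn't echo it back. A bare-bones sketch:

```python
import hmac
import secrets


def issue_csrf_token():
    """Generated per session, stored server-side, and embedded in each
    rendered form as a hidden field."""
    return secrets.token_urlsafe(32)


def check_csrf(stored, submitted):
    # constant-time compare; a form hosted on another site has no way
    # to know `stored`, so its submission fails this check
    return hmac.compare_digest(stored, submitted)
```

Note what this does and doesn't protect: it stops a *different* site from forging requests, but a script already running in your page can read the hidden field like anything else.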

In the situation where we're worried about what someone else's JavaScript has access to, it means their javascript is already loaded into your page: a vulnerability with poorly escaped user content, a rogue browser extension, a malicious or compromised CDN, etc.

In that situation, CSRF is irrelevant. The CS in CSRF is "Cross Site" - this is no longer cross site, as the script is running in the context of your own page.

So in this situation nothing we do can prevent them from making requests within the current user session.

But what we can prevent them from doing, is stealing a user identifying token: e.g. a session cookie, by marking it HTTP Only, so the JS environment doesn't see it.

JWT's accessed over XHR/etc and stored in local storage are available to any malicious scripts running, meaning they can grab the user's JWT and send it off to their own server, allowing them to make requests as the user.

If you send JWT's as cookies and mark them as HTTP only, you've defeated the "don't send session cookies with every request" goal of JWT, and the cookie will be bigger than most session cookies.


> Well, mostly a JWT lets you know the user that is signed in.

In my case I'm happy to use non-JWT methods to hold a user's session information (e.g. an HTTP-only cookie) and just want JWT to authenticate with other systems' APIs without needing to centralise auth/auth.

> They're unrelated attack vectors.

Good point. I guess I more just meant: what can malicious Javascript do with endpoints protected by JWT that it can't do with endpoints protected another way?

> JWT's accessed over XHR/etc and stored in local storage are available to any malicious scripts running, meaning they can grab the user's JWT and send it off to their own server, allowing them to make requests as the user.

I guess this answers the above question: the difference isn't in what can be executed in the browser, but what can be shipped to a different server to be used in attacks from there.

To mitigate that, then, how's this setup:

1) User's session is maintained in HTTP-only cookies.

2) Browser can use (1) to request a JWT token (and a refresh key) to hit a 3rd-party API endpoint. The token is valid for 5 minutes.

3) Browser can use the refresh key from (2) to request another JWT token.

Does that pretty much bring it up to parity with using cookies everywhere, while keeping the goal of noncentral auth/auth?


Agreed, the author of the article shouldn't point out a few flaws/bugs that some JWT libraries had in the past and then deduce that the whole standard is broken at a fundamental level. JWT is not designed to hold sensitive data, it's designed to hold non-sensitive authentication information like usernames, access groups, privilege levels, and other similar non-sensitive identifying information. It's useful because it loosens your reliance on back-end memory stores like Redis to track session data and makes your architecture much cleaner/simpler.
