Things to Use Instead of JSON Web Tokens (inburke.com)
376 points by LaSombra on May 8, 2017 | 195 comments



> The problem with JWT is the user gets to choose which algorithm to use.

Only if you completely bungle the implementation on the server-side. The 'none' 'algorithm' isn't supported by up-to-date JWT libraries with a good track record, and you should always limit the algorithms you'll allow on the server. So if you sign your tokens with an RSA-2048 key-pair, you would discard any token that isn't using that algorithm.
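That server-side rule can be sketched in a few lines of Python standard library code (an illustrative HS256-only toy, not any real library's API; production code should use a maintained library): parse the header, and reject anything whose alg isn't on the server's allowlist.

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(data: str) -> bytes:
    # JWTs use unpadded base64url; restore the padding before decoding.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def sign_hs256(payload: dict, key: bytes) -> str:
    header_b64 = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"},
                                          separators=(",", ":")).encode())
    payload_b64 = b64url_encode(json.dumps(payload,
                                           separators=(",", ":")).encode())
    sig = hmac.new(key, (header_b64 + "." + payload_b64).encode(),
                   hashlib.sha256).digest()
    return header_b64 + "." + payload_b64 + "." + b64url_encode(sig)

def verify(token: str, key: bytes, allowed_algs=("HS256",)) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # The crucial rule: 'alg' is untrusted client input, so anything
    # outside the server's allowlist ('none' included) is rejected outright.
    if header.get("alg") not in allowed_algs:
        raise ValueError("algorithm not allowed: %r" % header.get("alg"))
    expected = hmac.new(key, (header_b64 + "." + payload_b64).encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))
```

A token whose header claims "alg": "none" never even reaches the signature check; the allowlist throws it out first.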

Of course if you are building an API that blindly accepts whatever it receives from a user agent you are bound to create a security gap, but that holds true for anything users send you, not just JWTs. JSON Web Token is not that hard to grok, and it isn't a 'foot-gun' technology (just practice trigger discipline, i.e., read the documentation).

I'm a Java guy, so I'll limit my experience to the libraries available there, but of the four Java libraries I know of, three provide strict validation of the signing algorithm out of the box, and explicitly document this in their examples and documentation (I think the fourth does too, but I haven't tried that one myself).

JSON Web Token is a neat standard with a lot of good parts that can reliably be used to create and process authentication tokens. So if you are still worried about developers getting it wrong, then instead of saying 'don't use JWT', why not define a safe subset of the specification and promote that? Call it 'iron-jwt' or something. It beats rolling your own solution.

Or, if you want to be particularly constructive and feel that developers are misusing this technology, write a sensible, short, to-the-point implementer's guide for using JWT and spread the word.


> Only if you completely bungle the implementation on the server-side.

Which happens with regularity. This is exactly what led to the vulnerabilities mentioned in the article.

This is a bit like arguing against the claim that generating SQL queries by concatenating strings is unsafe. "Only if you completely bungle escaping the parameters", right? As it turns out, that's an extremely common mistake. It's easier to use a known-safe practice like parameterised queries than it is to rely on developers avoiding this pitfall each and every time they have to execute an SQL query.
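The parameterised-query version of that known-safe practice, sketched with Python's built-in sqlite3 (the table and data are made up for the example): the driver keeps the query text and the user-supplied value separate, so there is no escaping step to get wrong.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# Attacker-controlled input: concatenated into the SQL string this would
# be an injection; as a bound parameter it is just an ordinary value
# that happens to match nothing.
evil = "alice' OR '1'='1"
rows = conn.execute("SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
assert rows == []

rows = conn.execute("SELECT name FROM users WHERE name = ?", ("alice",)).fetchall()
```

The developer never escapes anything by hand, which is exactly the property being argued for here: remove the pitfall instead of asking everyone to sidestep it every time.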

Likewise with this. We know that negotiating the algorithm is subject to mistakes, and it offers no benefits. So instead of relying on developers avoiding this pitfall each time, let's avoid the pitfall altogether and get rid of negotiation.

There's a human aspect to security that's being missed here. Saying what boils down to "well developers shouldn't write insecure code" doesn't actually stop developers from writing insecure code. You can point the finger after the fact, but if you want to actually improve security, we need better standards than this.


> This is a bit like arguing against the claim that generating SQL queries by concatenating strings is unsafe. "Only if you completely bungle escaping the parameters", right?

No, you document how it should be done. Any major database layer for SQL provides parametrised queries and strongly suggests developers use them (usually in the quick-start guide). You can still do your own concatenation, but you aren't advised to.

The JWT libraries that are up-to-date and well-documented don't recommend blindly trusting the algorithm set in the header either, they recommend safe approaches such as configuring your whitelist and letting any unlisted algorithms fail. For example:

https://github.com/auth0/java-jwt#verify-a-token

Only if you completely refuse to read even the basic 'getting started' documentation will you run into this kind of weird vulnerability. The issues with this part of the JWT standard have been addressed by the libraries; what remains is just a little bit of effort on the part of the developer, which may be expected from someone writing security-critical code.

That is, don't blame the hammer for the shoddy carpentry.

> So instead of relying on developers avoiding this pitfall each time, let's avoid the pitfall altogether and get rid of negotiation.

So write a concise pamphlet that can easily be shared with all of the JWT libraries in an issue/bug report. If there is a good argument for requiring an explicit algorithm whitelist, they might welcome the suggestion.

Some libraries already do this, mind you!


Reminds me of the problem of fatal tractor rollovers on farms. Should we fix the problem with better documentation, better training, or better tractors?

http://www.sciencedirect.com/science/article/pii/S0925753596...

http://agrivita.ca/program/lowcost.php#LowCostRollOverProtec...


I think all three should be used. But this isn't the question being addressed, which is whether to use tractors at all...


Even if better training or better documentation is available, how do you get the driver of the tractor to benefit from it?

The solution is better tractors. Don't blame the user.


but why is that question being asked? because of safety / security...


I just documented how it should be done, in the article at the top of the page. The answer is "don't use JWT." Use specific tools for the thing you are actually trying to do, with APIs that are harder to mess up.


By that logic the answer is also "don't use SQL".


More like "Don't use SQL, if a thing adjacent to SQL exists that is better suited to a particular use case, doesn't have a history of poor implementations, and is harder for implementers and end users to screw up."

In this case:

- crypto_auth, or HMAC-SHA256 by itself, for authentication

- crypto_secretbox for symmetric encryption

- crypto_box or TLS for public key encryption
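The first of those, authentication with HMAC-SHA256 by itself, is only a few lines of Python standard library code (a sketch of the crypto_auth idea, not PyNaCl's actual binding; note NaCl's real crypto_auth uses HMAC-SHA512-256):

```python
import hashlib
import hmac

def auth(message: bytes, key: bytes) -> bytes:
    """Compute an authentication tag over the message, crypto_auth-style
    (HMAC-SHA256 here, per the suggestion above)."""
    return hmac.new(key, message, hashlib.sha256).digest()

def auth_verify(tag: bytes, message: bytes, key: bytes) -> bool:
    # Constant-time comparison, to avoid leaking via timing side channels.
    return hmac.compare_digest(tag, auth(message, key))
```

There is no header, no algorithm field, and nothing for a client to choose, which is the whole point of the suggestion.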


Good JWT libraries essentially have a "box" and "unbox" function; the work a client would need to do to go behind the library's back is on the same level as the work needed to go behind crypto_box's. Further, crypto_box works at a lower level than JWT encode/decode functions typically do, and would leave many concerns that JWTs handle in the user's hands, for them to handle alone. Having the user write their own code to handle those concerns is a terrible idea.


I'm with you on this. I've never thought of JWTs as a session storage mechanism; rather, I've always thought of them as a simpler (JSON-based) alternative to technologies like X.509 attribute certificates and SAML assertions. If you're creating a ticket / verifiable claim, there are a bunch of security considerations that JWT at least lays the groundwork for, and that NaCl secretbox doesn't appear to offer. Curious what OP thinks about putting things like payload canonicalization, subject/issuer/audience specification, and expiration entirely in the user's hands. I suppose it's up to implementors to do appropriate validation, but at least JWT simplifies the conversation a bit.


But what happens then is that the SQL alternative becomes SQLite, and it doesn't provide enough power or flexibility.


> Use specific tools for the thing you are actually trying to do, that have API's that are harder to mess up.

That sounds like it would require a greater amount of knowledge to do correctly than "careful JWT".


"Careful JWT" implies you know what you should be careful about. If you know that much, you should just use the better things.

It also ignores the role JWT plays in promoting bad security practices.


Never underestimate the ignorance (or stupidity) of a developer under a deadline.


And that hasn't happened with similar authentication systems? I'd be surprised if there weren't a lot of broken OAuth/OpenAuth/OpenID systems with similar issues in the wild. Most of the issues I've seen reported with JWT I never experienced, because I hadn't thought to even allow such variety... in one case, the public key was cached from a fixed URL, and in others, a shared key was forced in place.

It's only when you leave things open, or use a poorly implemented library, that you have issues. It's a really easy standard to understand, and frankly most of the library implementations suck, but it's really easy to roll your own and lock it down to the single set of implementation details you use internally.


> Only if you completely bungle the implementation on the server-side.

I call this "blaming the user for the designer's error-prone cryptographic designs".

The problem with JOSE (the superset of specifications that includes JWT) isn't libraries written by careless developers, the problem with JOSE is the standard itself.

https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-ba...

If in doubt, ask a cryptographer.

The problems, for people who don't want to read articles from comment links, are:

  - JSON Web Signing
    - alg headers
    - "This Header Parameter MUST be present and MUST be understood and processed by implementations."
  - JSON Web Encryption
    - RSA with PKCS1v1.5 padding (power word: Bleichenbacher 1998)
    - ECDH over NIST curves (and, in practice, invalid-curve attacks)
    - AES-GCM included in a list of asymmetric algorithm choices, for added confusion
    - AES-GCM for shared-key encryption, without guidance over nonces or key rotation
A better solution from JOSE would only give developers two options:

  - Version (v1, v2, v3, etc. which hard-coded the algorithm choices)
  - Operation
    - enc -> crypto_secretbox()
    - auth -> crypto_auth()
    - pub-enc -> crypto_box_seal()
    - pub-sign -> crypto_sign_detached()
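Sketched in Python with a stdlib HMAC standing in for crypto_auth (the format, version table, and function names here are hypothetical, just to make the idea concrete, not the commenter's actual spec), such a versioned scheme could look like this: the version prefix alone selects the construction, so there is nothing for a client to negotiate.

```python
import hashlib
import hmac

# Each version hard-codes exactly one construction; clients never pick
# algorithms. A future v2 would simply map to the successor algorithm.
VERSIONS = {b"v1": hashlib.sha256}

def seal(version: bytes, payload: bytes, key: bytes) -> bytes:
    digestmod = VERSIONS[version]
    tag = hmac.new(key, version + b"." + payload, digestmod).digest()
    return version + b"." + payload + b"." + tag.hex().encode()

def open_token(token: bytes, key: bytes) -> bytes:
    version, rest = token.split(b".", 1)
    payload, tag_hex = rest.rsplit(b".", 1)
    if version not in VERSIONS:  # unknown version: reject, don't negotiate
        raise ValueError("unsupported version")
    expected = hmac.new(key, version + b"." + payload,
                        VERSIONS[version]).digest()
    if not hmac.compare_digest(expected, bytes.fromhex(tag_hex.decode())):
        raise ValueError("bad tag")
    return payload
```

Upgrading means adding a v2 entry and eventually refusing v1, rather than parsing an attacker-supplied alg field.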


Specifying "v1" "v2" or "v3" is not materially different than specifying the exact algorithm by name, which is what JWT does.

You (and the linked article) are selectively quoting the JWS specification too, to imply that the server needs to always handle a token presented to it, regardless of the specified algorithm; the article is misleading the reader by doing so. The RFC also states,

> Even if a JWS can be successfully validated, unless the algorithm(s) used in the JWS are acceptable to the application, it SHOULD consider the JWS to be invalid.

As for the section about "MUST be understood and processed": the standard is simply saying that implementors must use the "alg" field. You can't ignore it. Extending that to "must process anything the client sends" results in nonsense.

The suggestion to "use NaCl" ignores the entirety of JWT claims, and foists implementation of that functionality onto every single consumer that needs them. (And you hope that they recognize that they need them.) Centralizing implementations of cryptographic code into a few, well-vetted implementations is better than just throwing our hands up and telling the community to fend for themselves. While some JWT implementations had bugs in the past, these same bugs could easily be present in the custom implementations of your proposal, of which there would be many.


You're missing the distinction between algorithms, which rarely have flaws, and protocol constructions, which often do.

Algorithm negotiation within an otherwise static protocol incurs complexity, which has a security cost. Just as importantly, it doesn't address the real origin of cryptographic protocol flaws. Note how, in order to correct cryptographic flaws in TLS, we had to push people first from SSL3 to TLS 1.0, and then later to TLS 1.1.


Certainly it does, but what would you suggest JWT do differently? One can't realistically hard-code the algorithm(s) in use: that would prevent any ability down the road to upgrade to a better set.


Once again:

1. "Hard-code" the simplest possible sound crypto construction that solves the specific problem the protocol is meant to solve.

2. Put a version on the whole protocol.

3. If the crypto constructions later need to be amended, upgrade the whole protocol.

The anti-pattern is attempting to use a static "outer protocol" with a negotiated and regularly changing "inner protocol"; that's an architecture we know from experience does not work well.

You know you're in trouble when developers are forced or even encouraged to make decisions between things like RSA and ECC.


From https://news.ycombinator.com/item?id=14292223

> * Flaws in crypto protocols aren't exclusive to, but tend to occur mostly in, the joinery of the protocol. So crypto protocol designers are moving away from algorithm and "cipher suite" negotiation towards other mechanisms. Trevor Perrin's Noise framework is a great example: rather than negotiating, it defines a family of protocols and applications can adopt one or the other without committing themselves to supporting different ones dynamically. Not only does JWT do a form of negotiation, but it actually allows implementations to negotiate NO cryptography. That's a disqualifying own-goal.

My proposal replaces the joinery with "select a version". You don't get to mix-and-match primitives. You won't fall into the trap of Reasoning By Lego.

> Centralizing implementations of cryptographic code into a few, well-vetted implementations is better than just throwing our hands up and telling the community to fend for themselves.

I'm not throwing my hands up and saying "fend for yourselves". I've outlined what needs to be changed to make it secure, and said I'll write a formal spec when I have the time, as keeping a roof over my family's head takes precedence over doing a lot of thankless unpaid work.


This is a much better point (or perhaps a much better way of stating it) than what I got from your original post.

I agree wholeheartedly that you don't want to give the end-user mix-and-match primitives. But I don't think that JWT really gives you that, at least at the level I think you're discussing. For example, JWT doesn't let you choose an asymmetric cipher and a hash algorithm; you have to choose a precomposed whole, such as "RS256" (RSA w/ SHA-256) or "HS256" (HMAC w/ SHA-256). To me, this seems equivalent to NaCl, in a sense. In JWT, I must choose one of "RS256", "HS256", etc. In NaCl, I must choose one of the crypto_* functions. Are these not both giving the user equivalent choices between equivalently pre-composed functionality? (Are you simply saying that the JWT standards offer too many choices between essentially equivalent cryptographic combinations, such as multiple choices for HMAC or RSA, and/or that you disagree w/ the exact combinations offered?)

> I've outlined what needs to be changed to make it secure, and said I'll write a formal spec when I have the time

Perhaps it's what you're leaving unsaid, but the gist I get from your proposal is that you're discarding the entirety of JWT's claims section, essentially equating a JWT with an authenticated and potentially encrypted message; i.e., the output of the NaCl functions that you present.


So, as suggested above, come up with a standard that encodes that subset – "Iron-JWT" or what have you.

Much easier for adoption to use a "better version" of the widely-used tool you're already using than some newfangled thing that a guy on HN said was more secure.


I've previously made my proposal to the JOSE IETF mailing list. The participants just turned their noses up at it.

https://www.ietf.org/mail-archive/web/jose/current/msg05621....

https://gist.github.com/paragonie-scott/c88290347c2589b0cd38...

When I'm not dealing with client work, I'll write a replacement for JOSE that has the properties I outlined above. Until I find the free time for this, things that increase my income take precedence.


Notice that they did not include JWT in this!


JWT uses JWS, JWE, or both. Any criticism that targets JWE and JWS is necessarily relevant to JWT.

The problems with JWT being addressed are in the domain of cryptography designs, so it's natural to criticize the cryptography components.

The other problem with JWT is how people use it: http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-fo...


The point however is that JWS/JWE is not as flawed as JWT is.


Yes it is. Most of the problems people point to with JWT exist in JWS or JWE.


>JSON Web Token

Thanks, I didn't know what JWT meant. When I first read the title I thought it would be about the Google Web Toolkit (GWT), which I incorrectly remembered as being named Java Web Toolkit (JWT) because with GWT you use Java.

Author would do well to edit their post, changing the title to "Things to Use Instead of JSON Web Tokens" and replacing the first occurrence of "JWT" in the body text with "JSON Web Tokens (JWT)".

In addition to helping visitors from HN, this change could also improve the possibility for others to find the text in the future when they search for JSON Web Tokens using a search engine.


To make it more fun, people say JWT as "jot" which sends me into a blind rage.


I can imagine. Unfortunately, "jot" is the formal pronunciation of JWT…

Makes me glad my colleagues and I are Dutch; we just pronounce the names of the three letters in Dutch: yaywaytay.


I prefer that pronunciation too!


Let's try "Ja'wut". As in "'thentikayshun? Ja, wut am I gonna use for that?"


I guess the general problem with security is that developers use libraries (and frameworks) precisely so they don't need an in-depth understanding of the functionality the library provides. That becomes an issue when the library doesn't provide sane defaults, or provides too many options, i.e. ways to shoot yourself in the foot by choosing bad options, whether by your own choice or by a programming error such as relying on malicious client input.

Libraries that provide one solid hard-coded implementation based on current best practices are a much better idea for most projects, but then what of giant specs like OAuth and OpenID Connect...


OP literally just deleted a bunch of files from the repo and then went out of his way to break compatibility with normal JWT HS256 tokens by changing the typ and removing the alg argument. If you wanted to pull out the other algos from jwt-go, deleting their implementations would've been entirely sufficient. There is a map from alg string to implementation that would've ended up with only HS256.

Instead OP made sure these tokens can't be used outside their fork, and thus only in Go.

ISHYGDDT.


> ISHYGDDT

For readers like me, this means "I seriously hope you guys don't do this."


Thanks for the information. I find that these acronyms may increase the bandwidth of communication, but the worsened quality of the prose, and the dependence on context that is not always obvious, make this pattern very annoying.

I've read this one specifically many times without knowing what it means. The author may think that an interested reader would look up its meaning; in my case I rejected what I read and moved on.


ISHYGDDT is 4chan slang. It's deliberately obtuse. You're not supposed to look it up, you're supposed to LURK MORE until you figure it out from context.

Even if you do know what it stands for, that's still only half of the true meaning.

EDIT: By which I mean this sort of language is not a good fit for an audience that doesn't follow 4chan memes.


Yeah, let's please keep this childish nonsense off of HN.

    > Even if you do know what it stands for, that's 
    > still only half of the true meaning.
I cringed.


    >I cringed.
Thank you for your valuable contribution.


Working as intended.


> Only if you completely bungle the implementation on the server-side.

Did you even read that article?

> You could imagine excuses being made about the people who died, or lost their fingers or hands; they were inattentive, they weren't following the right procedure, bad luck happens and we can't do anything about it, etc.

"You only die if you completely bungle the coupling."


> "You only die if you completely bungle the coupling."

That analogy would only really make sense if JWT were always a risk unless the developer set it up properly. But the reality is that the majority of JWT libraries are up to date and don't contain the implementation flaws mentioned.

We could tell users to never use a specification because sometimes they might use a library that wasn't implemented right. But there are legit use cases for JWT that the alternatives don't 100% cover, and the risks are entirely manageable.

So I agree that the advice should instead be "use responsibly", which means use a popular, up-to-date, and well-vetted library. All web authentication options come with advice like this; just look at OWASP.

Unless the state of JWT libraries is terrible, this isn't a big thing to ask. From my experience in two languages (Ruby/Node) there have been high-quality JWT libraries available where the maintainer has stayed on top of known implementation issues with JWT.

Eventually JWT libraries in each major language will mature (if they haven't yet already?) and the risks will be quite minimal. The industry (train operators) aren't ignoring the risks here and letting people die... it seems to me that the library authors are adapting.


one of the Node libraries has a TODO that says "validate X.509 certificates"

the client gets to choose what algorithm they want to use


This is so annoying. Because one person implements a shitty JWT library, the whole spec is stupid and should be avoided? Tell that to Amazon, Microsoft, Facebook, and any implementer of OpenID Connect. Somehow, they've all figured out use cases where JWT makes sense and it hasn't been a security nightmare.

JWT.io has a comprehensive list of not shitty implementations of JWT. It's literally the first page I see when I google JWT.


Point to similar errors being made routinely by NaCl or libsodium implementations, and I'll give you a pass here.


JOSE/JWT is a standard for a wire protocol encoding and format.

NaCl/libsodium are single libraries with FFI wrappers in various languages.

Comparing the two is like trying to decide on the quality of the DirectX vs. OpenGL APIs, by the quality of their implementations. If you've only got one library (DirectX, libsodium), and you're throwing a lot of users at it, it's probably going to be pretty solid whether or not its API is designed well. If you've got a lot of implementations (OpenGL, JOSE/JWT), you'll probably get some crap implementations, just by the fact that there will be "core implementations" that get used+tested in production, and then "edge implementations" that were just written to scratch one guy's itch and have no battle-hardening.

You can argue, separately, that anything with crypto elements like JOSE/JWT has should be absorbed into some high-level crypto abstraction framework like NaCl/libsodium, so that it can also benefit from there just being one high-level implementation of it.

But JOSE/JWT is a lot more than crypto; for one thing, it's also JSON. Do you want there to be a JSON parser in libsodium, the way there's an ASN.1 parser in OpenSSL? This is the path that led OpenSSL to its current "everyone uses it for everything but it's so bloated that you can flip over any random rock and find a bug" state.


it sounds like you're trying to suggest that the idea being proposed by the JWT specification is not good


Never read such nonsense masquerading as an authoritative piece of information.

Author clearly doesn't appreciate that the ultimate truth of any authentication scheme is that you should not trust anything from the user. So what if you take away a client's ability to specify which algorithm a piece of data is signed or encrypted with; if you blindly accept whatever a client did and proceed, then you're always gonna find yourself vulnerable.

Intel didn't use JWT or pass a client header; they just trusted what the client sent and landed themselves in the same mess. I make the point to illustrate that what real security pros do is make sure basic checks like validating user input are done regardless of what specification or algorithm is involved.


Security is half a technical problem and half a usability problem (for developers). We need more emphasis on the usability half.

For example, we should be doing everything we can to ease the mental burden of writing secure code. Whenever we can reasonably eliminate the possibility of a vulnerability, we should. Even if someone has to be an idiot to make the mistake, just don't make it possible to make the mistake in the first place. We should also try to reduce the amount of potentially-malicious input, reduce the amount of options and special cases, etc. Simplify.

Usability improvements pay off multiple times. They make developers' jobs easier because there's less code to write and the code that does get written is easier to reason about. They make security auditors' jobs easier because there are fewer "dumb" mistakes to check for, and that means more of the audit time can be spent looking for deeper flaws.

(Nitpick about the phrase "validating user input": The user's input should never be trusted, but that doesn't mean we should write code to try and decide whether the user's input is "safe" or "unsafe", as that can be impossibly-hard depending on what's happening after the validation check. The code should just be secure no matter what the input is.)


> Usability improvements pay off multiple times.

There's a bit in Google's paper about the Chubby lock service where they note that a big reason for the success of that project was all the concessions they made for developer usability.


Even his history of train coupling was lopsided. I know nothing about the subject, but this:

> Still, the railroads stuck with them because link-and-pin couplers were cheap. You could imagine excuses being made about the people who died...

You have to think that changing every coupling on every train at the same time is a pretty big and very real excuse. No single operator could change first, unless they were a near monopoly. I'd guess that railroad companies weren't big fans of their trained employees getting crushed.

So tired of the trope that big evil corporations put zero value on the lives of their employees. Fact is, they value them as much as the replacement cost + cleanup cost + downtime + bad PR, which in many cases is pretty high. Also, sometimes the owners actually aren't evil monsters.


They got $50 million from the government as a carrot to switch


These points are covered in the article, and the library I wrote makes it impossible to parse a token without performing verification with a secret key.


Yes, I read it. However, you've failed to grasp a few things:

1. The client should not get to 'choose' which algo it uses, it merely states what algo it did use. Emphasis should be on the server to verify that the client didn't do (or claim to do) something bad.

2. What happens if in five years' time the algo you've standardised on is suddenly compromised? You've ripped out the mechanism that updated clients could theoretically use to signify that they're using a newer algorithm. Take for example Git and its recent SHA-1 collision issue.

3. By emphasising that 'clients should not get to choose', you're making the case that the choice is of itself a weakness. It is not. The weakness is usually in how inputs are processed (far more often than in any weakness of the algorithms).


I point out numerous cases where the server made a mistake in verifying the algorithm specified by the client. That is the point of reducing the complexity of the interaction.

If you are using JWT and someone breaks SHA2, you still have to worry about downgrade attacks. To evade them, you'll have to inspect the algorithm the client specifies and reject tokens that use SHA2 or below. Or: roughly the same position you'd be in with one good algorithm, still in need of a backwards-incompatible upgrade.


This concern was always supposed to be handled by algorithm whitelists. The _intent_ was that services would consult a whitelist of algorithms before accepting a particular JWT.

The idea was certainly not that services blindly accept any token from every algorithm shipped in the library.

Your browser supports TLS 1.0; should you throw it out?

> If someone breaks SHA2...

JWT has a built-in mechanism for handling this: expirations.


Expirations are not really relevant to this. It won't prevent a user from forging new tokens with a different expiration (using a broken algorithm), nor will it somehow magically make the original token unreadable.

The expiration is just an additional value in the payload that the implementation is supposed to check against.
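Concretely, that check is a few lines the implementation has to remember to run after signature verification (a sketch; real libraries such as PyJWT do this for you, typically with configurable leeway):

```python
import time

def check_expiry(payload: dict, leeway_seconds: int = 0) -> None:
    """Reject an already-signature-verified payload whose 'exp' claim has
    passed. Per RFC 7519, exp is seconds since the Unix epoch."""
    exp = payload.get("exp")
    if exp is None:
        raise ValueError("missing exp claim")
    if time.time() > exp + leeway_seconds:
        raise ValueError("token expired")
```

Note that this does nothing against a forged token minted under a broken algorithm: the forger controls exp too, which is the point being made above.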


There's no "better TLS" out there for shipping a tool that connects to millions of different servers. There are many better options than JWT.

Unlike JWT, hundreds of the world's best security engineers at various browser companies are working on mitigating the situation as well as possible.


Indeed, however, consider that servers can and do limit which TLS versions and cipher suites they accept.


That's how I always implemented JWT, but I'm pretty sure the lazy implementer would get that wrong. I'm with you that a protocol should include some versioning mechanism, but it probably shouldn't have as many knobs as JWT has right now.

The decision to include RSA was misguided, and there's no point in letting users choose the size of the hash, or whether to use key-wrapping or not (in JWE; when in doubt, the answer is always to use key-wrapping with GCM, as 96-bit nonces are not long enough if you send many messages). There are other confusing knobs which shouldn't have been there.

It's best to go the NaCl way: choose one algorithm considered best-in-class for each type of crypto, and leave space for adding new algorithms in the future when the old algorithm becomes compromised. Cipher breakdown events like the SHApocalypse don't happen out of the blue; we usually get gradually improving attacks at least 5-10 years before the breakdown happens (SHA-1 had weaknesses found as early as 2005).


Literally, someone has created poor implementations of TLS and SSL that have caused massive security issues for the world (Heartbleed, POODLE, FREAK, BEAST), but you seem to have glossed over that detail and recommended it anyway.


It's deployed onto billions of browsers already, and it's the only way to talk to e.g. the Twitter API. We don't really have a choice about it. But if you want you can read tptacek on it: https://gist.github.com/tqbf/be58d2d39690c3b366ad


Side note: that's a cool list


I maintain a spreadsheet of the pros and cons of various authentication techniques - https://docs.google.com/spreadsheets/d/1tAX5ZJzluilhoYKjra-u...

JWT is extremely useful when you want a one-time use token to pass a claim to another system. For example, employee portal generates a link for a user to check their available leaves in the HR system. Since the user is logged in to the employee portal, they shouldn't have to login to the HR system. JWT is great for this use case. But for general sessions management, there are better solutions.
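As a sketch of that hand-off (the claim values, helper name, and HMAC signing format here are illustrative, not any particular library's API), the portal could mint a short-lived, single-use token like this:

```python
import base64
import hashlib
import hmac
import json
import time
import uuid

def make_claim_token(user_id: str, key: bytes) -> str:
    """Mint a short-lived claim token for a portal-to-HR hand-off like the
    one described above (HMAC-SHA256 signed, hypothetical format)."""
    payload = {
        "iss": "employee-portal",      # who minted it
        "aud": "hr-system",            # who should accept it
        "sub": user_id,
        "exp": int(time.time()) + 60,  # short-lived: one minute
        "jti": str(uuid.uuid4()),      # unique ID so the HR system can
                                       # reject replays after first use
    }
    body = base64.urlsafe_b64encode(
        json.dumps(payload, separators=(",", ":")).encode()).rstrip(b"=")
    tag = hmac.new(key, body, hashlib.sha256).hexdigest().encode()
    return (body + b"." + tag).decode()
```

The receiving system verifies the tag with the shared key, checks iss/aud/exp, and records jti so the link only works once.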


I don't actually believe JWT is useful for that, and indeed recommend something simpler like OAUTH2 (OpenID Connect) simply because it actually handles this use-case, and because it doesn't require any cryptography or anything complicated for a (junior) developer to consume correctly.

EDIT: I just saw your spreadsheet. Perhaps you have gotten a bad impression of OAUTH2 by looking at some client libraries. Many of these are made complicated by trying to handle all aspects of OAUTH2 instead of just the single-sign-on flow, which is simple enough that you shouldn't even need a library[1].

[1]: https://aaronparecki.com/oauth-2-simplified/


OpenID Connect is based on JWT: http://openid.net/specs/openid-connect-core-1_0.html#IDToken

If JWT is too complicated and confusing, OpenID Connect inherits all that complexity and then adds some more.


OAUTH2+OpenID Connect does not require a consumer or producer read, parse, or produce a JWT for the single-sign-on flow.

The authentication event is a regular JSON object.

There is no need to validate it since it was received from a server-to-server TLS-protected HTTPS request.

This is not anywhere near as complicated as using JWT for session storage directly.
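As a sketch of what that consumption looks like (Python, with the HTTP transports injected as plain callables so nothing here is a real endpoint; the URLs and field names follow the usual OAuth2 shape but are illustrative):

```python
import json

def fetch_identity(http_post, http_get, token_url, userinfo_url,
                   code, client_id, client_secret, redirect_uri):
    # Server-to-server exchange of the one-time authorization code.
    # The response arrives over a TLS channel we opened to a known
    # host, so it can be consumed as plain JSON -- no JWT parsing.
    token_response = json.loads(http_post(token_url, {
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }))
    bearer = {"Authorization": "Bearer " + token_response["access_token"]}
    return json.loads(http_get(userinfo_url, bearer))

# Stub transports standing in for HTTPS requests, for illustration:
def fake_post(url, form):
    assert form["code"] == "one-time-code"
    return json.dumps({"access_token": "opaque-token"})

def fake_get(url, headers):
    assert headers["Authorization"] == "Bearer opaque-token"
    return json.dumps({"sub": "user-123", "email": "a@example.com"})

identity = fetch_identity(fake_post, fake_get,
                          "https://idp.example/token",
                          "https://idp.example/userinfo",
                          "one-time-code", "client-id", "secret",
                          "https://app.example/callback")
```

The consuming side never touches cryptography; the trust comes from the direct TLS connection to the provider.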


Well, if you're using the authorization code flow and trust your TLS certificate authorities enough, then yeah, it's probably safe, though you'll definitely be violating the spec: http://openid.net/specs/openid-connect-core-1_0.html#CodeFlo...

But if you go as far as not verifying the ID Token, what do you need OpenID Connect for? Just use plain old OAuth 2.0.


There are a lot of convenient things to lift out of OpenID Connect, for example:

* The Discovery URL

* Standardisation in the subject name

* The userinfo endpoint


Sort of agree that the example I provided isn't the best use case for JWT. Perhaps a better example is email verification - you send a link in the email with a JWT.

re. OAuth2 - I wrote the spreadsheet from the point of view of a developer of REST APIs. As a developer exposing APIs, OAuth2 is only useful if I want my users to decide what data they want to share with third party developers.


> Sort of agree that the example I provided isn't the best use case for JWT. Perhaps a better example is email verification - you send a link in the email with a JWT.

Have you seen how Steam performs email verification as part of a successful 2FA login flow?

For password-recovery flows, you may still want to log (audit) attempted password recovery, which means a database hit anyway. From that perspective, "magic link" is good enough.
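A sketch of that trade-off in Python with sqlite3, where the token table doubles as the audit trail (the schema and 15-minute expiry window are invented for illustration):

```python
import secrets, sqlite3, time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE recovery_tokens
              (token TEXT PRIMARY KEY, email TEXT,
               expires_at REAL, used INTEGER DEFAULT 0)""")
db.execute("CREATE TABLE audit_log (email TEXT, event TEXT, at REAL)")

def issue_magic_link(email, ttl=15 * 60):
    token = secrets.token_urlsafe(32)  # unguessable, no crypto to misuse
    db.execute("INSERT INTO recovery_tokens VALUES (?, ?, ?, 0)",
               (token, email, time.time() + ttl))
    db.execute("INSERT INTO audit_log VALUES (?, 'link_issued', ?)",
               (email, time.time()))
    return "https://example.com/recover?token=" + token

def redeem(token):
    row = db.execute("""SELECT email, expires_at, used FROM recovery_tokens
                        WHERE token = ?""", (token,)).fetchone()
    ok = row is not None and row[2] == 0 and row[1] > time.time()
    email = row[0] if row else None
    # Every attempt is logged, successful or not -- the audit you want
    # for password recovery anyway.
    db.execute("INSERT INTO audit_log VALUES (?, ?, ?)",
               (email, "redeem_ok" if ok else "redeem_failed", time.time()))
    if ok:  # single use: burn the token
        db.execute("UPDATE recovery_tokens SET used = 1 WHERE token = ?",
                   (token,))
    return email if ok else None

link = issue_magic_link("a@example.com")
token = link.split("token=")[1]
```

Since you're hitting the database anyway, the token can be random instead of signed, which also makes it revocable and single-use for free.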

> OAuth2 is only useful if I want my users to decide what data they want to share with third party developers.

When you have logged in with an OAUTH2 provider, you're given a token which can be used against the API in any of the mechanisms you describe if your users are likely to do more than one request. Even if you only ever authenticate over your own OAUTH2 provider, this might still have advantages involving audit, reliability/availability, and so on.

One thing that I find very useful is using OAUTH2 to hand over authentication to my customers; they want to have their own password policy, and their own two-factor system, and so on. I don't want to implement that for every customer. Even when I implement a OAUTH2 provider for each customer I can do this easily, but nowadays, they might be able to use Amazon Cognito, or Azure to log into my API.

The first time someone wants to consume my REST API from within their dashboard, I'm already ready for this: I can build an SSO between their dashboard and my OAUTH2 provider (or just use theirs!) keeping everything separate from my actual business API, and the customer feels like this was a massive customization.

And so on.


Good work with the spreadsheet!

It states about Stateful Session cookie that it's "supported by all web frameworks and browsers.", but Flask is an example of popular web framework which doesn't support them. The "session" in Flask actually uses Stateless Session cookies.


Yes, I went a little too far with "all web frameworks". I just changed it to "most web frameworks".


Suave.io also uses stateless session cookies, storing all state in the cookie itself.

Also, what about Hawk? https://github.com/hueniverse/hawk


The use case you mention is exactly the reason why we have SAML2 and similar SSOs.


Except if you use SAML, now you have XML digital signatures. If you thought JWTs were bad, take a look at the specs for XML DSig sometime. You can specify multiple signed portions of a document, have multiple signatures, or choose to sign only a part, use a bunch of different algorithms, specify your own canonicalization rules, and you get all the usual fun of XML parsing risks.

If JWTs are bad, XML digsigs are "literally Cthulhu".


I was involved tangentially with a project where a SAML2-based federated login had to be put in place, and I remember there were 3 conference calls before the devs implementing it even understood the flow. I don't even think the guys on the other end understood it properly.


It appears to be much, much simpler to just integrate JWT into your system than to deploy a SAML identity provider. I would love to see an easy-to-deploy, decent SAML IdP, but so far I have not found one. If anyone has any recommendations I would love to hear them.


They don't exist! IMHO ADFS is actually the best of a bad lot. Your friendly Windows admin can set up the SSO in a matter of minutes without having to know the spec inside out and upside down.

The other platforms I've used or integrated with - Tivoli, Layer 7, Ping Federate, a huge hack job written in PHP - all took weeks/months to get working.

That said I haven't tried Spring SAML recently, so maybe that is painless now. But probably not


I've been happy with SimpleSAMLPhp in production for about 6 years now. Configuration was very straightforward and the whole process was order-of-magnitude simpler than any other SAML IdP we tried at the time.


Keycloak (http://www.keycloak.org/) is quite easy to deploy.

For our usage, even that was overkill and we are using Ipsilon (https://ipsilon-project.org/), with IPA backend. It is more quirky, docs are scarce, but it works for us.

On app side, it is mod_auth_mellon.


Shibboleth should cover the "decent" side of your question, given it's pretty heavily used in academia. The SP side of things is fairly straightforward to setup; whether the IdP counts as easy to deploy is probably a matter of opinion and experience.


If you're looking for a self-hosted solution I can't help, but as far as providers go, we've had decent success with Okta. Of course, SAML kind of sucks no matter how you slice it, and Okta seems to think the future is in standards like OpenID Connect.


That's an excellent overview. Thank you for writing it.


There's a lot of conversation on when should I use JWTs and when not to.

But as alternatives go, has anyone tried using Macaroons?

They've been mentioned on HN a few times, but I've never seen the tech catch on, even though it looks like a very cool way to implement authorization.

Is there any particular reason why?

Here are some resources I've found; I think Macaroons are a very interesting concept. Even the paper is accessible to someone like me, without any real security or cryptography expertise.

Research paper: https://research.google.com/pubs/pub41892.html

A new way to do authorization: http://hackingdistributed.com/2014/05/21/my-first-macaroon/

(This link has an invalid HTTPS certificate if that concerns you): https://evancordell.com/2015/09/27/macaroons-101-contextual-...

After I found this, I've always wanted to try it out but haven't had the chance. Does anyone have any experience with or comments about it?


You can use the HTTP version until I get some time to set up Let's Encrypt (feel free to disable JavaScript/CSS/etc.).

I've used macaroons in several settings and highly recommend them. The only thing really missing for "wider" adoption is something like a standard caveat language, the lack of which keeps macaroon use pretty localized to your specific deployment.


Thanks for the input!

I'm still hoping the tech catches on somehow, and then there will probably be more people interested in a standardized way to define a caveat.

When I was toying with Macaroons some time ago, I thought that I could use JSON to format the caveat and toyed with the idea of defining a set of useful caveats, but never got too far.


It seems that most people have those thoughts when they play around with Macaroons :) Check out the google group for some discussions: https://groups.google.com/forum/#!forum/macaroons

If it wouldn't doom them to obscurity, I'd personally like to see a set of vocabularies that macaroon users can use (a la semantic web). This is probably as simple as namespacing caveats with globally unique identifiers to indicate the vocabulary.


Yes, exactly! I actually did read the whole forum when I found out about them, and that's where I got the idea. Not that I went much further than what was discussed, but it definitely planted a seed in my mind...

Re: Vocabulary

Something like that sounds reasonable. I would just maybe add some sort of SET operations logic that you could use to build more complex stuff.

Thus I would be able to delegate to someone a Macaroon that has admin access to all of Project P EXCEPT (or MINUS) access to Project P's children in list L. Or maybe delegate with access to any object in List (L1 AND L2). Sort of like a SQL query (to an extent).

At least that was the use case I needed back then, i.e. How to secure an API in a hierarchical way, such that certain user roles can access only certain children of a certain parent node.


Wow, what an interesting idea! It looks like Macaroons resemble HS256 JWTs (no asymmetric crypto), with claims encoded as strings (rather than JWT's key-value pairs) that are interpreted by the verifier (so you can build more complex claims, like login time between X and Y). On the surface, JWT and Macaroons look very similar.


Except that you can't change a Macaroon header and convince either the recipient of a token or the server verifying it that it is instead a NIST P-Curve public key attestation, so they're really only similar if you ignore the many problems of JWT.

It's the nature of a grab-bag metastandard that it can be twisted into a poor version of any related standard. That's the problem with standards like these.


I'm not familiar with either the details of how JWTs or Macaroons are implemented, that's why I'm interested in a cryptographer or security expert comment on this.

Do you think that Macaroons are a sane alternative to JWTs? or in general, any comments on using them for authorization instead of OAuth2 or any of the alternatives?

Edit: I think I do understand that some of the problems with JWT is that they have a lot of moving parts so the attack surface is much greater as well as not helping the developer avoid getting shot in the foot.

Macaroons by comparison seem much simpler to my untrained eye, in that they are "just" a chain of HMACs, so that any modification of the different sections of a Macaroon would render it invalid.

So why would Macaroons be or not be a suitable alternative to JWTs or other auth methods?


Big caveat: I'm an author of the Macaroons paper.

Macaroons and JWTs are two fairly different things. JWTs are a combination of two things: 1. A standard encoding for authenticated JSON objects, and 2. a fairly ad-hoc standard set of field names.

I say ad-hoc because it's pretty much up to you what you actually put e.g. in iss, sub, jti, etc., and most of them are optional. So applications of JWTs still have to make those decisions (e.g.: If you want to convey that a JWT is issued by a particular key, do you (a) put the hash of the key as the "kid" in the header, (b) put the hash of the key as the "iss" claim, or (c) neither or both. The answer seems to be whatever you want.)

So at the same time, JWTs tie your hands by deciding encodings and identifiers for you (e.g. bytestring valued claims must be base64 encoded into strings, which then gets base64 encoded again for signing (!), keys should be JWKs, ...), but also don't actually make the important decisions for you.

Macaroons (as described in the paper) are more abstract. There is no standard encoding, or registry of hash algorithms. And indeed there is no standard language for caveats (aka claims), since that is entirely application specific. So.. macaroons make no decisions for you at all.

The main point of macaroons is delegation: If you want to pass some authority to someone and let them pass on a subset of that authority to someone else, Macaroons do that well. JWTs don't.

As a consumer/verifier of macaroons, they allow you (through third-party caveats) to defer some authorization decisions to someone else. JWTs don't.

If you just want to protect the integrity of a cookie, or an OAuth token, and nobody but you, the issuer, should touch it, then you just have to sign it - so macaroons and JWTs will both do fine. JWTs have the advantage of fixing some of the details for you.
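To make the delegation point concrete, here is a toy Python version of the chain-of-HMACs construction (illustrative only; it ignores the real libmacaroons wire format, third-party caveats, and key handling): each caveat folds into the signature, so a holder can restrict the token further but never widen it.

```python
import hashlib, hmac

def _chain(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def mint(root_key: bytes, identifier: str) -> dict:
    # The issuer derives the initial signature from a secret root key.
    return {"id": identifier, "caveats": [],
            "sig": _chain(root_key, identifier.encode())}

def attenuate(m: dict, caveat: str) -> dict:
    # Anyone holding the macaroon can add restrictions: the new
    # signature is HMAC(old signature, caveat), which can't be undone
    # without knowing an earlier signature in the chain.
    return {"id": m["id"], "caveats": m["caveats"] + [caveat],
            "sig": _chain(m["sig"], caveat.encode())}

def verify(root_key: bytes, m: dict, holds) -> bool:
    # Recompute the whole chain from the root key, then check that
    # every caveat predicate holds in the current request context.
    sig = _chain(root_key, m["id"].encode())
    for caveat in m["caveats"]:
        sig = _chain(sig, caveat.encode())
    return hmac.compare_digest(sig, m["sig"]) and all(
        holds(c) for c in m["caveats"])

root = b"resource-server-root-key"  # invented for illustration
m = mint(root, "calendar-access-for-alice")
delegated = attenuate(attenuate(m, "permissions = busy_time/read"),
                      "date_range = 2017-05-01...2017-07-01")
```

The `attenuate` step is the part JWTs have no analogue for: the client narrowed the token's authority without ever touching the issuer's key.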


Thanks for your feedback!

If I understand correctly, Macaroons would be very suitable, for example, for building a framework for a single sign-on service: an auth server mints the Macaroons according to certain access policies, and then whatever services need to be secured behind the SSO implement a verifier that consults the auth server for the access policy and then grants or restricts access to a request with an attached Macaroon.

Then if I decide to delegate access to someone else (or e.g. to myself in another device), I can attenuate it according to some specified parameters (by time, by allowed operations or access, by device, etc) and then send it over. Then whoever I delegated access to could in turn do certain restricted requests to the services behind the SSO without even going to the auth server to get an access token again (as long as the attenuated Macaroon is valid).

Is this correct?

Since I discovered Macaroons I've been wanting to figure out how well they would fit to build an auth server to restrict access to an API by user roles, requested instance, or even API routes, for example. This way, implementing things like "share" links should be easier/safer, as well as hierarchical access to an API (i.e. if I can access route X I can also access routes below X, or not, etc.).

Off topic: And THIS is why I absolutely love Hacker News. After asking something I got a response from both an author of the paper and the author of the blog post where I found said paper. This community is just amazing! :)


> Is this correct?

Yes. While SSO is best implemented with a 3rd-party caveat, you can also do it with simple delegation: the final verifier, i.e. the server that controls access to the resource, mints the initial macaroon and gives it to the auth server. The auth server hands out time-limited attenuations of that to authenticated clients (who can further attenuate and hand them to others).

The downside is that the auth server has full unlimited access to the resource, so compromising that "master" macaroon would be catastrophic. With 3rd party caveats, the resource owner can make a pact with the auth service (or multiple ones) such that a sign-off from an auth service is needed, but not sufficient, to access the resource.

> Since I discovered Macaroons I've been wanting to figure out how well they would fit to build an auth server to restrict access to an API by user roles, requested instance or even API routes for example

Ultimately this boils down to defining a caveat language to describe these policies and building an evaluator for it. The paper has some guidelines for writing evaluators, but the gist of it is: Verify the integrity first. Make sure evaluation of each caveat can only ever restrict what's already allowed, and never escalate.
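The caveat-evaluation half of such an evaluator can be sketched like this (Python; the `key = value` and `time < N` caveat syntax is invented for illustration): start from "allowed" and only AND in restrictions, and deny on anything unrecognised.

```python
import time

def evaluate_caveats(caveats, context):
    """Return True only if *every* caveat is understood and satisfied.
    Each branch can only restrict what is already allowed, never
    escalate; an unrecognised caveat denies by default instead of
    being silently ignored."""
    for caveat in caveats:
        if " = " in caveat:
            key, expected = caveat.split(" = ", 1)
            if str(context.get(key)) != expected:
                return False
        elif caveat.startswith("time < "):
            if time.time() >= float(caveat[len("time < "):]):
                return False
        else:
            return False  # fail closed on anything we don't understand
    return True

ctx = {"account": "3735928559", "route": "/projects/p/reports"}
ok = evaluate_caveats(["account = 3735928559",
                       "time < " + str(time.time() + 60)], ctx)
denied = evaluate_caveats(["account = 3735928559",
                           "some-future-caveat-type"], ctx)
```

(Integrity verification of the HMAC chain would of course come first, before any of these predicates run.)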


It is a suitable alternative, where 'suitability' depends on your needs. For simple session management, it doesn't offer an advantage over encoding all your claims as JSON and adding an HMAC-SHA256 MAC. But Macaroons come to shine when you need to delegate authority to third parties. Let's say you are logged in to your work calendar and want to give a third-party service permission to read your busy time (but not any other appointment data) for the next 2 months. You take your current macaroon and tack 3 caveats onto it:

- third_party_service = busy_timer

- date_range = 2017-05-01...2017-07-01

- permissions = start_time/read, end_time/read

You can do everything on the client side and you don't have to mess with OAuth.


if only someone would write an article documenting sane alternatives to JWTs


Maybe I wasn't clear, but I was referring specifically to Macaroons vs JWTs, i.e. Macaroons as an alternative to JWTs. I read your article but didn't find any reference to them, that's why I asked.

Edit: Sorry if I came across as dismissive of your article somehow.

What I meant to say is that some comment from a security expert, specifically on the Macaroons concept/implementation would be great, because to my untrained eye it looks very nice and secure but then again, I'm not an expert and thus can't trust myself on that.


You say that the standard is bad and that it indirectly caused bugs in JWT implementations, but looking at Macaroons examples, there are still some corner cases where a programmer can make mistakes.

For example, this piece of code (fragment taken from [0]) restricts the Macaroon usage to a given account... or does it?

  M.add_first_party_caveat('account = 3735928559')
Only someone familiar with the topic will notice that it doesn't add anything to M, as Macaroons are immutable; it instead returns a new, adjusted object (the same "issue" happens with Java's BigInteger). If you know what you are doing you won't make this mistake, but in that case you would also have safe JWTs...

As far as I can see Macaroons have interesting ability to be adjusted by intermediaries to limit their scope. Say you have Macaroon that gives access to your Gmail account you can "attenuate" it to limit scope only for emails in the next 10 minutes without contacting third party. That'd be very useful for OAuth like flows...

[0]: https://github.com/rescrv/libmacaroons/blob/master/README


Almost all of the 'alternative' use cases mentioned by the OP are not what JWT was designed for. JWT was designed to authenticate users/entities by giving them a token which contains basic non-sensitive metadata about that user/entity. It's for authentication, not for authorization.

Sure, some implementations of JWT have had bugs in the past, but this hasn't been an issue for quite a while and it's definitely not an issue with the RFC itself. It's the same as if you blamed the TLS/SSL RFC for being responsible for the heartbleed bug in OpenSSL - It makes no sense.

>> You might have heard that you shouldn't be using JWT. That advice is correct - you really shouldn't use it.

This type of blanket thinking is dangerous. There are cases where JWTs are practically necessary and unavoidable. Whenever an extremist blanket idea like this catches on in this industry, it becomes a major pain to have to explain to people over and over why in THIS SPECIFIC CASE it is actually the best solution possible.


> It's the same as if you blamed the TLS/SSL RFC for being responsible for the heartbleed bug in OpenSSL - It makes no sense.

It makes some sense.

Features are attack surface. Each extra feature or option your protocol enables is more code you need to manage. So careful decisions need to be made: just because you can easily specify a feature for many use-cases in your protocol doesn't mean you should, because once you spec it people might just use it.

For heartbleed, for example, why was the TLS Heartbeat extension ever specified for TLS over reliable protocols? It serves no purpose: TCP has TCP_KEEPALIVE if that's a thing you need. But it was specified, and because it was specified it was implemented, and then it became attack surface that needed to be protected. It wasn't. So I guarantee to you that if RFC 6520 had been more restricted in scope, the Heartbleed attack would not have happened (or would have been a much more minor story, I can't remember if Heartbleed affected only the TLS and not the DTLS implementation).


I disagree with most of his points, but what really bothers me is the underlying message of his article. I don't think we should fool ourselves into thinking we can design a protocol that isn't susceptible to implementation problems.

Did we get SSL/TLS right? No, we've failed a few times, but we're doing better. Why is that? People are paying attention more that ever to OpenSSL and now we have competition with LibreSSL and the like.

Side note: the irony of his coupler example: it's a flawed spec, not flawed implementations... the implementations followed the spec, but the spec was dangerous from the start. JWT is the opposite: a good spec, but a long time ago some early adopters didn't get things right.


I guess heartbleed gave everybody the wrong image, but most TLS vulnerabilities were vulnerabilities in the spec. The fixes to RC4, BEAST, SWEET32, POODLE, LOGJAM, CRIME, BREACH, BRICK, BORKED, BUSTED and whatever else you could think of - all involved changing the spec. Sometimes by adding mitigations, but mostly by banning TLS features.

TLS is a legacy protocol. Nobody likes it very much, but it's such an established standard that you don't have much choice than trying to fix it. The same might happen to JWT, but in both cases we'll be fixing the specs, not just the implementations.


You could look at SMACK TLS for a few dozen examples of implementation-specific flaws due solely to the complexity of representing TLS state machines. These aren't protocol weaknesses; they're weaknesses in the implementations due to complexities in the protocol.


TLS1.2 seems to be pretty secure... I think upcoming TLS1.3 will be very secure, and it's supposed to decrease latency.


I've implemented some of OpenID Connect, "a simple identity layer on top of the OAuth 2.0 protocol". Combined with OAuth 2.0, it's one of those giant corporate specs (giant at least for security/crypto purposes) with way too many options so anybody can do anything with any algorithm and any kind of workflow.

I wanted to go with a simple, minimal NaCl-based system but in the end did implement a lot of OpenID Connect in the hope it would make interoperability easier with existing client libraries. I don't want to write a client for each and every programming language other people in any related projects would want to use. That in my opinion is the value of something like JWT: you can tell people up front that that's the way they're going to get the user's ID data, no matter how much server or client implementations will be switched around.

I feel that when faced with a spec like OpenID Connect and OAuth 2.0, it's not necessary to implement more than what is strictly needed. If you don't need all the flows and algorithms in your project, don't implement and don't accept them. The parts you implement should comply – why base it on a spec if you throw any interoperability out of the window – but don't waste months of trying to correctly implement all of a huge corporate spec if that doesn't make sense for the size of your project or organisation. Complete implementations might have value only if that's your main product and you need that line to sell it.

I use JWT only as ID, not as session, allow only one server-chosen algorithm for signing, and rely on TLS for encryption.

There's clearly a need for up-to-date "web-approved" standards to pass crypto-friendly data structures around – or maybe I'm just not familiar with any recent efforts. Normalising and serialising JSON is pretty error prone...


> implementation errors should lower your opinion of a specification. An error in one implementation means other implementations are more likely to contain the same or different errors. It implies that it's more difficult to correctly implement the spec.

Discussion on cryptography and particular implementations aside, I think this is sound and I normally follow this when judging technologies.


In cryptography, we have a concept of "misuse resistance". Misuse-resistant cryptography is designed to make implementation failures harder, in recognition of the fact that almost all cryptographic attacks, even the most sophisticated of them, are caused by implementation flaws and not fundamental breaks in crypto primitives. A good example of misuse-resistant cryptography is NMR, nonce-misuse resistance, such as SIV or AEZ. Misuse-resistant crypto is superior to crypto that isn't. For instance, a measure of misuse-resistance is a large part of why cryptographers generally prefer Curve25519 over NIST P-256.

So, as someone who does some work in crypto engineering, arguments about JWT being problematic only if implementations are "bungled" or developers are "incompetent" are sort of an obvious "tell" that the people behind those arguments aren't really crypto people. In crypto, this debate is over.

I know a lot of crypto people who do not like JWT. I don't know one who does. Here are some general JWT concerns:

* It's kitchen-sink complicated and designed without a single clear use case. The track record of cryptosystems with this property is very poor. Resilient cryptosystems tend to be simple and optimized for a specific use case.

* It's designed by a committee and, as far as anyone I know can tell, that committee doesn't include any serious cryptographers. I joked about this on Twitter after the last JWT disaster, saying that JWT's support for static-ephemeral P-curve ECDH was the cryptographic engineering equivalent of a "kick me" sign on the standard. You could look at JWT, see that it supported both RSA and P-curve ECDH, and immediately conclude that crypto experts hadn't had a guiding hand in the standard.

* Flaws in crypto protocols aren't exclusive to, but tend to occur mostly in, the joinery of the protocol. So crypto protocol designers are moving away from algorithm and "cipher suite" negotiation towards other mechanisms. Trevor Perrin's Noise framework is a great example: rather than negotiating, it defines a family of protocols and applications can adopt one or the other without committing themselves to supporting different ones dynamically. Not only does JWT do a form of negotiation, but it actually allows implementations to negotiate NO cryptography. That's a disqualifying own-goal.

* JWT's defaults are incoherent. For instance: non-replayability, one of the most basic questions to answer about a cryptographic token, is optional. Someone downthread made a weird comparison between JWT and NaCl (weird because NaCl is a library of primitives, not a protocol) based on forward-security. But for a token, replayability is a much more urgent concern.

* The protocol mixes metadata and application data in two different bag-of-attributes structures and generally does its best to maximize all the concerns you'd have doing cryptography with a format as malleable as JSON. Seemingly the only reason it does that is because it's "layered" on JOSE, leaving the impression that making a pretty lego diagram is more important to its designers than coming up with a simple, secure standard.

* It's 2017 and the standard still includes X.509, via JWK, which also includes indirected key lookups.

* The standard supports, and some implementations even default to, compressed plaintext. It feels like 2012 never happened for this project.

For almost every use I've seen in the real world, JWT is drastic overkill; often it's just a gussied-up means of expressing a trivial bearer token, the kind that could be expressed securely with virtually no risk of implementation flaws simply by hexifying 20 bytes of urandom. For the rare instances that actually benefit from public key cryptography, JWT makes a hard task even harder. I don't believe anyone is ever better off using JWT. Avoid it.
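For reference, the whole hexified-urandom scheme fits in a few lines of Python (the dict stands in for a database or cache row):

```python
import secrets

# Server-side session store: token -> session data.
# In production this is a database or cache row, not a dict.
sessions = {}

def issue_session(user_id: str) -> str:
    token = secrets.token_hex(20)  # the whole scheme: 20 bytes of urandom
    sessions[token] = {"user_id": user_id}
    return token

def authenticate(presented: str):
    return sessions.get(presented)  # None means not a valid session

def revoke(token: str):
    sessions.pop(token, None)  # instant revocation, unlike stateless tokens

t = issue_session("user-42")
```

The server-side lookup is the cost; revocation and audit being plain row operations is the payoff.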


often it's just a gussied-up means of expressing a trivial bearer token, the kind that could be expressed securely with virtually no risk of implementation flaws simply by hexifying 20 bytes of urandom.

This should probably be the first thing anyone thinking of using JWT reads.


> For almost every use I've seen in the real world, JWT is drastic overkill; often it's just a gussied-up means of expressing a trivial bearer token.

This is certainly true for schemes that require trips to a source-of-truth database to authorize a token (c->auth, c->resource, resource->auth). It's also true for schemes where the token is associated with capabilities that are loaded from a database. Using JWT to implement RBAC is flawed.

However, there is a strong use case for the token carrying its own capabilities -- that is, a token that is more than just "20 bytes of urandom".

If a resource service can derive the capabilities associated with a token generated by a trusted authentication service without contacting that service, that has real world implications for lower latency, higher throughput applications that are simpler to compose and operate.

As far the cryptographic credibility, the idea behind capability-based security is an old one, and I'm sure you're aware of the research. This particular spec may be problematic, folks may be misunderstanding and misusing the primitives, but the underlying idea is sound.

Previous HN discussion of CapSec Wikipedia article https://news.ycombinator.com/item?id=10684129

Google Fuchsia https://en.wikipedia.org/wiki/Google_Fuchsia


I don't deny that capability tokens are useful; they certainly can be. JOSE is just a poor vector for getting them.

The issue isn't with capabilities, or delegated authentication, or public key tokens, or even standardizing any of those things. I think at this point I've been pretty clear about what I believe the issues actually are.


Thanks for clarifying; I'll reread your comments.

Do you have a standard that you would recommend for any of those things?


found in the article linked at the top of the page:

- crypto_auth, or HMAC-SHA256 by itself, for authentication

- crypto_secretbox for symmetric encryption

- crypto_box or TLS for public key encryption
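For the first item, "HMAC-SHA256 by itself" really is a couple of lines in Python's standard library (the fixed key below is purely for illustration; use os.urandom(32) and proper key storage in practice):

```python
import hashlib, hmac

key = b"\x00" * 32  # illustration only; use os.urandom(32) in practice

def authenticate(message: bytes) -> bytes:
    # crypto_auth equivalent: a 32-byte tag over the message.
    return hmac.new(key, message, hashlib.sha256).digest()

def check(message: bytes, tag: bytes) -> bool:
    # Recompute and compare in constant time.
    return hmac.compare_digest(authenticate(message), tag)

tag = authenticate(b'{"user_id": 42}')
```

There's no header, no algorithm field, and nothing for a peer to negotiate down.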


I was asking about standards for Web-based capabilities, delegated authentication, and public key tokens; not the individual message authentication or (a)symmetric encryption components.

Echoing tptacek's comment above: the problems with using those individual pieces is in the joinery - combining them in ways that are broken.


> JWT is drastic overkill; often it's just a gussied-up means of expressing a trivial bearer token, the kind that could be expressed securely with virtually no risk of implementation flaws simply by hexifying 20 bytes of urandom

Can you point us to an article showing how to implement this for a web app communicating with an API? Lacking crypto expertise and documentation, the average programmer is going to use something like Auth0, and if his users' recipes or jogging history is compromised, so be it.


this is "symmetric cryptography" in the article at the top of the page


"symmetric cryptography" is not mentioned in your article. And I specifically asked for example implementations, which is something Auth0 gives me for just about every platform. If your goal is to have programmers act on this information, the alternative(s) should be as easy as the implementation you are moaning about. Especially when the claim is that JWT is overly complicated. Show us the simpler alternative, spelled out as well as the Auth0 team spells it out, and maybe us non-crypto experts will bite.


The protocol is complicated. It's evident to everybody that the library implementations are simple, which is why so many people with no background in cryptography have such strong feelings about JWT. We get it: they made it very easy for you to do the wrong thing, and you'd rather things stay easy. I sincerely sympathize with your problem. But it is a problem.


Thanks for recognizing no one (as far as I know) has made it easy to do the right thing. Yet. Since it sounds like doing the right thing might not actually be that difficult, why haven't market forces taken over? Since many are willing to pay for Auth0 services, why isn't there a similar product that isn't built on JWT and that gets the approval of the crypto community? I would pay for it. I bet others would. I want to focus on building my web app and API and the user experience. I don't want to have to worry that I've made a mistake with authentication, which is outside my area of expertise. I would bet that describes a huge number of programmers creating public facing apps.


And it is not like even AES-CBC + HMAC is particularly hard to write. (you should remember to encrypt-then-MAC or even better use AEAD these days)
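A minimal sketch of that ordering, for illustration only: the keystream here is a toy construction built from SHA-256 purely so the example runs with the standard library; real code should use AES or, better, an AEAD from a vetted library. The point being illustrated is the structure: MAC the ciphertext, and verify the MAC before decrypting anything.

```python
import hashlib, hmac, os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream from SHA-256 -- illustration only,
    # use a vetted cipher (AES, XChaCha20) in real code.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    body = nonce + ct
    # Encrypt-then-MAC: the tag covers the ciphertext, not the plaintext
    tag = hmac.new(mac_key, body, hashlib.sha256).digest()
    return body + tag

def open_sealed(enc_key: bytes, mac_key: bytes, token: bytes) -> bytes:
    body, tag = token[:-32], token[-32:]
    # Verify the MAC (in constant time) BEFORE touching the ciphertext
    if not hmac.compare_digest(hmac.new(mac_key, body, hashlib.sha256).digest(), tag):
        raise ValueError("bad MAC")
    nonce, ct = body[:16], body[16:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

Getting the two "remember to" steps wrong (MAC-then-encrypt, or non-constant-time comparison) is exactly the kind of trap the thread is arguing about.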


If you have to append "you should remember" to cryptographic instructions, I would argue they are hard.


sure, here's some code I use in Go to encrypt and decrypt cookies and OAuth state objects https://github.com/kevinburke/go-html-boilerplate/blob/maste...


Thanks for the example, but I don't use Go and don't plan on learning it just to understand how to switch from using JWT. I need an example for a React frontend consuming a nodejs API (that's just one example). I hope you're getting the idea that these many anti-JWT articles are strong on complaining but weak on providing a solution as well documented with examples as the Auth0 team has provided. Flawed or not, JWT is being used because there are parts (like sample code in many languages) of the JWT offering that are better than what's offered by an article complaining about JWT.


In general, implementing non-cryptographically-secure protocols is easier than implementing cryptographically secure tools; there are many ways to set up something insecurely.

I recently set up libsodium for a client running Node.js on the server, and could work on this for you as well, if you want to send me an email, I can send you my rates.


What's your value proposition? Auth0 works for me, has a free tier, and my users' data is not anything so private. On the server side I only accept the HMAC-SHA256 algorithm, negating your biggest concern about JWT. Perhaps, as you claim in your article, that "is not JWT". I'm ok with that. How would your services save me money or increase my profit? I would be far more interested in a freemium SaaS alternative to Auth0 that fixes the concerns with JWT than I would be in paying an individual consultant (who might or might not be a crypto / auth expert - hard for me to verify) and who could be hit by a bus.


I don't know, you asked about how this could be done, and I could build it for you.

My value proposition is I've shipped a lot of useful things for companies, and found security vulnerabilities, and those skills are in demand these days, I guess.

You can read more here: https://burke.services


Well, what I asked for was an article showing how to do it in many languages and platforms. What I got was an offer to hire a high priced consultant. This is one of the reasons JWT and Auth0 are winning. I would love to do things the right™ way, and I appreciated your article. But there is a lot more the crypto community (or someone) needs to offer to make the alternatives to JWT just as attractive as JWT.


If you liked his article, as I did, what are we actually debating at this point? If all you're pointing out is that a single blog post hasn't solved the problem of JWT being promoted as a safe crypto standard when it isn't, well, everyone agrees on that already. Nobody has claimed this blog post to be more than it is: a good blog post.


Are we debating? I was trying to find a viable alternative to JWT that isn't "hire me at an expensive hourly rate". I think that's a pretty reasonable goal after reading yet another "don't use JWT" article, of which I've seen dozens in the past few years.


I only made the comparison with NaCl because the OP did (and used the wrong primitive) and brought up Forward Secrecy as a major flaw in JWT. While we're at it, OP also mentioned JWT libraries ignoring embedded X.509 certificates as a problem.

But you need alternatives. NaCl is not an alternative, because you need to base a protocol on top of it. Noise is not an alternative because it's not meant for the same purposes. Fernet is the only thing close to being an alternative, but it lacks useful features (for instance, how do you specify a key ID for key rotation?), supports only symmetric encryption, has a weird cipher choice, and barely gets any library support.


In fact I do not need alternatives to the JWT standard. Part of my argument is that what JWT is trying to do --- providing one overarching standard for every conceivable token authentication use case --- is simply a bad idea.

Regardless, bad engineering is bad engineering. Bad security engineering gets people hurt. It's not that JWT doesn't do the best job it could do: it's that it's a snakepit of implementation traps that create vulnerabilities. For me, the argument ends there.


In fact, I think JWT is probably a classic example of non-cryptographers combining multiple proposals with different use cases into one standard. The others such as JWS and JWE are not as flawed.


This complains that JWT does not have forward secrecy, and then recommends NaCl's box primitive instead... which does not have forward secrecy either. (This isn't exactly drawn attention to in the NaCl docs for some reason.)


I am using JWT for my projects to keep stateless sessions between servers and for some other tokens (refresh, register, reset pass etc.). Of course extra security measures are required (MitM protection [HTTPS etc.], XSS / CSRF prevention etc.), but this has nothing to do with JWT. I use encryption with a frequently rotated private key to encrypt the part of the payload that only the server may read.

A good read at: https://stormpath.com/blog/where-to-store-your-jwts-cookies-...


The problems with JWT are not related to MITM, XSS or CSRF vulnerabilities. You'll have to address these issues regardless of the type of token you're using.

The problems with JWT can be divided into two classes: 1. Too many options, making it easy to misuse. Even if you disallow the 'none' algorithm (like most newer JWT libraries), there are still many other ways to break it. e.g.: https://auth0.com/blog/critical-vulnerabilities-in-json-web-...

2. Misguided cipher choice. AES-GCM (easy target for nonce-reuse), RSA, NIST P-curves.

So in short, even if you're using encryption, JWT just makes it easy for the crypto itself to fail.


I agree. Thank you for pointing that out. That is why I restricted my JWT code to only accept / use certain options. Of course I could still have chosen the wrong cipher for my specific use case and am aware that JWT will not solve this for me.

What JWT is doing is actually not that special, as it is just a standardized container (akin to MKV and supported codecs) inside which existing technologies can be used. Easy to write something similar if you know what you are doing. I did that before, but still missed some extra verifications already built into JWT.

Of course, the chosen technologies allowed to be used inside a JWT can still be prone to vulnerabilities. I am not sure if that can be blamed on JWT. People should still think about which options to use.


This is a case where the high-level point - "insecure options shouldn't be configurable" - conflicts with a reality of crypto protocols: you have to make tradeoffs based on the best known attacks and the platform you're running on, and those change over time. The best hashing algorithm, HMAC algorithm, and signing algorithm to use on a mobile device in 2007 isn't the same as the ones you'd pick today. Any protocol expected to last 10 years should allow for the selection of an underlying crypto algorithm. Maybe "none" should never have been an option - but tying the algorithm to the protocol has pretty severe drawbacks, too.

More broadly - JWT isn't just about the exchange between a single client and a server; the choice of "use cases" misses a very real constraint of multi-party protocols. Within the context of OpenID (or OAuth more broadly), it's about the relationship between an end-user, a resource owner, a client and an authorization server -- all of whom need to be able to interact with a token, and often offline.


This is in fact not true: an HMAC-based design from 2007, even one that used SHA-1, would remain sound today. HMAC hasn't changed in something like 20 years, and works even with MD5 (we've had since the 90s to make the migration from MD5 and I obviously don't recommend anyone continue using it).

To non-cryptographers, this idea that protocols need to be constantly ready to accept new ciphers in case there's a break in one of the old ones is a very big deal, so much so that every amateur crypto protocol design includes cipher suite negotiation as the centerpiece of the protocol. In reality, it's a minor concern. With the exception of RC4, pretty much every TLS ciphersuite flaw has been the product of flawed joinery and not problems with underlying ciphers --- which is why the major breaches have forced us to move people not from one ciphersuite to another, but instead from SSL3 to TLS 1.0 to TLS 1.1.

For an expert design example, look at Trevor Perrin's Noise framework. An even simpler idea: simply specify a single set of coherent cryptographic constructions, and then version the whole protocol --- if there's a serious break in your protocol, you'll almost certainly have to make changes across the protocol anyways --- and upgrade the whole thing.
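The "version the whole protocol" idea can be as simple as dispatching on a version prefix, where each version binds one fixed, coherent set of crypto choices and there is nothing left to negotiate. A sketch (handler names are hypothetical):

```python
# Each protocol version pins one complete set of crypto choices.
# Upgrading means adding "v2" and, eventually, deleting "v1" entirely.

def decode_v1(body):
    # v1: e.g. HMAC-SHA256 over the body -- fixed, never configurable.
    # (Stub: a real handler would verify and return the claims.)
    return ("v1", body)

HANDLERS = {"v1": decode_v1}

def decode_token(token):
    version, sep, body = token.partition(".")
    if not sep or version not in HANDLERS:
        raise ValueError("unsupported protocol version: " + version)
    return HANDLERS[version](body)
```

Contrast this with JWT's per-token `alg` header: here the server's code, not the token, decides which cryptography is acceptable.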


I was just thinking: what is a good way to phrase the problem in order to understand whether there are more pros / cons to having the client vs the server decide which algorithms to use in a transaction?

I haven't thought this through fully, but as far as I can tell ecosystems on the web evolve. And so I think it's probably a good idea that we architect things for the web in such a way that we don't inhibit that evolution. When you put a decision like encryption algorithm in the client's hands does it feel to anyone else that the security will evolve more rapidly, and thus remain more robust? When the client is deciding, there's a larger pool of people "voting" for what is an acceptable level of security. Even though a lot of those "votes" will be based on the default settings of a library, that library will over time become less popular as more and more people consider it unsafe.

By the same token, if a particular service (server-side) does not keep up with that evolution, fewer and fewer people will use it as other (safer) services pop up.


It doesn't seem like the author is actually proposing an alternative, rather, the only thing says is "Don't use JWT, here is how you can do X with Nacl"

Even worse;

It is suggested that to authenticate a user one should use TLS. That might be true for a login form, but not beyond that. Once you have logged the user in, you need to continue authentication on every request. JWTs are one way to put this information on the client side without having to put any much trust into it.

The second example is a simple asymmetric encryption example which... for some reason JWT is not a solution for? I've used Ed25519 plenty of times with JWT (custom algo header in this case), so I see no problem there plus... I don't think this is what JWT is actually trying to solve.

The third example is encrypted data to the client, which is also something JWT isn't trying to solve, this is what JWE is for. JWT is purposefully unencrypted and I'm not sure how many developers would actually pretend a signature is encryption.

The last part is an actual example of JWT use cases, in which case however the author blabs on about the (in)famous "algo=none" bug a lot of libraries had. I've specifically used the Go library mentioned and strictly enforcing the algo is a no-brainer if you are using a custom one anyway. On the other hand, I still use HMAC for a separate token for short-term authentication over endpoints (to make blacklisting logins easier).

So JWT simply gives me some flexibility in sharing common code for authentication, the same code can consume the long-term tokens and the short-term tokens and much more if needed in the future.

I'm not saying JWT is the end all for problems, but it's rather easy ready-to-use solution for some of my problems.

Why write a signature library when there is one ready to use that, with care, is safe to use?


I address these points in the article


"..."algorithm agility" that is a proud part of the specification" (italics mine)

No idea about "JWT" but maybe this is part of the psychology that keeps schemes like SSL/TLS in use. (Part.)

Do you think there are people who are actually proud of achieving complexity in a specification or implementation?


> But the server and the client should support only a single algorithm, probably HMAC with SHA-256, and reject all of the others.

If you have a centralized system dealing w/ authentication, this doesn't work, as now everything that needs to verify JWTs needs the secret. The support for RSA, instead of HMAC, is there to meet a different set of requirements.

What people fail to remember is that JWTs — and the libraries that work w/ them — do not just wrap data to be authenticated. They also handle verifying the various claims on a token — is this token applicable here? is this token expired? — things relevant to an authentication token, not just a mere signed blob of data. The suggestion to leave those to be reimplemented by every single end-user is bad advice.
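The claims checks the parent describes (expiry, audience, and so on) look roughly like the following. The field names follow RFC 7519; the `leeway` parameter for clock skew and the single-string `aud` handling are simplifications of my own.

```python
import time

def validate_claims(claims, expected_aud, now=None, leeway=30):
    """Minimal RFC 7519-style claims validation (simplified sketch).

    Real libraries also handle 'aud' as a list, plus 'iss', 'iat', 'jti', etc.
    """
    now = time.time() if now is None else now
    if "exp" in claims and now > claims["exp"] + leeway:
        raise ValueError("token expired")
    if "nbf" in claims and now + leeway < claims["nbf"]:
        raise ValueError("token not yet valid")
    if "aud" in claims and claims["aud"] != expected_aud:
        raise ValueError("token not intended for this audience")
    return claims
```

Every application re-implementing this by hand is a lot of surface area for off-by-one and sign errors, which is the parent's point.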


Maybe not a great time to mention I also found errors in the claims verification in the one library I tested, and ended up ripping that code out as well


What makes this all worse is that companies like Auth0 and Stormpath have flooded search results with self promoting blog posts masked as tutorials. This just makes it harder for developers to learn the basics without getting product shoved in their face.


I don't see a problem with companies writing tutorials and OSS libraries to share with the community. If there is an economic incentive for documentation to be written and free software to be created then what's the problem really?

If the article is blatant self-promotion then I could see the problem with that and those articles don't get anywhere on HN/Reddit/etc so they won't rank well on Google either. But I've seen some high quality docs/libraries coming out of companies like that and it's a great thing.

They aren't responsible for developers doing poor implementations of JWT. And other developers equally have a voice, such as the OP, if they have a problem with the choice of technology they are promoting.


For the record, JWT is a payload and format spec for JWS/JOSE, which is really what you're complaining about. JWT is merely a claims set and an optional dot-notation serialization of a single signature.

JWT/JWS libraries that handle all the validation alone are treacherous. You should ALWAYS parse the JSON beforehand, perform your claims and header(s) validation, and then pass the payload/header/signature to the verification function. If anything, it's computationally cheaper and less risky than blindly running the signature through a validation function and then checking the header.


you could also choose to use safer libraries and cryptographic primitives


There's nothing a library can add to make things "safer" when it comes to application-specific constraints. If I'm checking to make sure I only accept a specific algorithm, I'm not going to blame a crypto library for validating something that is technically accurate (short of maybe disabling the "none" algorithm by default). You should also be in possession of at least the public key beforehand, and rely on that for authentication, rather than anything in the claim.


The implementations of JWT that I've mostly used... internal signing and validation only... the algorithm and public key are pretty much pinned down. If you do that, JWT is a pretty valid format... it's when you leave things open to "whatever" that the security issues come through.

I don't have a problem with JWT, only in that having the signature method, etc configurable in the first place as an implementation detail is probably a bad idea.


If the specification requires the server to decide which algorithm to use, a naive client who doesn't know which algorithms are safe or not is just as dangerous.

As far as I know there are no algorithms that exist today that we can guarantee will never be broken in the future. So algorithm choice inherently must be decoupled from the specification.

EDIT: Or a naive server implementation for that matter...


Are there any examples of big JWT hacks which took place due to a vulnerability?


Good writeup of what you should be careful about with regard to JWT. There are some inaccuracies there too. First of all, JWT doesn't support encryption at all - it's JWE that does that. It's an important distinction, since most JWT libraries I ran into don't feature encryption, so if you want encrypted tokens you'll need to use an extra JWE library, or a more full-featured JOSE library.

JWE also supports more than just RSA - it definitely does support Elliptic Curve Cryptography (although I would prefer if they chose Curve25519 instead of the NIST curves), and the curves are used with EC Diffie-Hellman in a certain construction that actually gives you more forward secrecy than NaCl box. NaCl box offers you no forward secrecy, whereas all the ECDH-ES algorithms in JWE offer you partial forward secrecy, which saves you when the sender key is leaked, but not when the receiver key is leaked. That's the best you can get: we can't have two-way perfect forward secrecy in a non-interactive protocol like JWT since we can't perform a direct negotiation of ephemeral keys between sender and receiver. Of course, we're actually comparing apples to oranges here: you can get the same partial forward secrecy guarantee with libsodium's sealed box (not in the original NaCl): https://download.libsodium.org/doc/public-key_cryptography/s...

As for an alternative to JWT/JWE, I think it really depends on what people want, but I have slightly different suggestions:

1. For simple access tokens in low-medium load scenarios, access tokens stored in Redis are probably simpler to implement than JWT + revocation.

2. If you don't have any secret information inside the token, HMAC-SHA256.

3. If you have secret information, I'll actually go with libsodium's XChaCha20-Poly1305 AEAD: https://download.libsodium.org/doc/secret-key_cryptography/x... secretbox doesn't support AEAD, but you often have external data that you want to tie to the token. It's very easy to implement exactly the same construct with XSalsa20, so it will really be just secretbox with AEAD support, but that's non-standard and you won't find any native library support.

4. Public signature of public data with Ed25519 (this is NaCl's crypto_sign).

5. Authenticated asymmetrically encrypted tokens: to get partial forward secrecy, the easiest way would be using sealedbox and then signing the result with crypto_sign. It's not the most efficient way to do this, but NaCl/libsodium don't have a tailored operation for this use-case, so you would have to use primitives directly.

All in all, not quite clear cut, and nobody is making a library that does that for you, so it's easy to see where JOSE is coming from.


He should release "JWT-H2O" i.e. JWT with HMAC 256 Only. :) (left a similar comment on his blog)


One should always validate algorithm before doing anything with JWT on the server side.


This is acronym heavy. Author needs to specify in the opening which JWT he is talking about. Java Web Toolkit? or JSON web token? It took me a fair amount of reading to work out which one he means. And even them I'm only pretty sure he means JSON web token, but I can't be sure because the Java Web Toolkit also connects to servers, and uses encryption, and all the other stuff he talks about.


People still use Java Web Toolkit?

Okay, dumb jokes aside, they're clearly talking about alternative ways to authenticate, so I wouldn't know how you could conclude this was about anything but JSON Web Tokens.


People still use Cobol, mate.

Just specify in the first sentence what you are talking about as full words. Is it that hard? Apparently it is.


Usually people using the acronym daily process it as a single concept not even consciously thinking it's a compound term. Look at yourself: you used "Cobol" instead of "common business-oriented language" [0].

[0]: https://en.wikipedia.org/wiki/COBOL


>People still use Cobol, mate.

Which makes sense. It has been a historically successful language that served well for decades.

JWT wasn't even good when it was new.


I had the same thought. I thought Java Web Toolkit, and then thought, yeah, I bet there are lot of things to use instead of it, and proceeded to think the article would explain that Java Web Toolkit was more commonly used than I expected.

Nope, JSON Web Token.


A session store, and give your client a cookie.


JWT is just storing more info than you would with a cookie but pretending it's secure by encrypting it with an algorithm the browser has access to.


JWTs are not encrypted; they include an HMAC signature to prove that the token claims (which are a Base64-encoded JSON object) have not been modified.



Oh hey, it's another "JWT libraries used to be terrible" article.

Idiots shouldn't write authentication code. Especially when credentials are involved.

Funny that jwt-go was used as an example; it was never vulnerable to the alg-none attack: https://github.com/dgrijalva/jwt-go/commit/2b0327edf60cd8e04...


>> it's another "JWT libraries used to be terrible" article

Dead on. Sure, some early JWT implementations were poor... during the first few months that JWT was gaining traction. That was over 2 years ago. You may as well write an article about how Internet Explorer is awful, while referring to qualities present in IE 6. Disclaimer: I still dislike IE/Edge, but no longer have pertinent reasons as to why I maintain that opinion.

There is not a single valid criticism of JWT from a security perspective. The only criticism outside of security considerations I'd view as valid is that the length of the strings quickly becomes bloated for the amount of information contained within (i.e. inefficient bandwidth and storage usage, the same complaint as with long cookie headers requiring more TCP packets).



I cover this point in the article


Huh? The alternatives listed make me wonder if the author actually knows what JWT is being used for.


hey, author is missing the bit where you can disallow client to choose the algorithm. No need to read the rest...


In some way he addresses that towards the end:

> It's important to note that my experiment is not JWT.

> When you reduce JWT to a thing that is secure,

> you give up the "algorithm agility" that is a proud part

> of the specification.

I don't agree with him though: unless the standard requires you to implement all of the available algorithms, one may choose to implement only those that he/she deems safe/worthwhile.


>I don't agree with him though, unless the standard requires to implement all of the available algorithms, one may choose to implement only those that he/she deems safe/worth.

Agreed. I view this flexibility as a developer feature, not a client feature.


Correct. Let's say you implement RSA-2048 and server-side reject all other algorithms. Then during a security audit the crypto guy points out that RSA-2048, while not broken per se, is not up to the generally-accepted 128-bit security threshold. You should use RSA-3072+ or switch to ECDSA. You decide to switch to ECDSA for the space savings. But what about all the deployed clients? Well, since it's not actually broken, you continue to accept RSA-2048 for the next couple of years until something else permanently breaks support for old clients. Supporting client-specified algorithms lets you do a safe, phased-in upgrade without breaking compatibility or requiring any fancy engineering.
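That phased rollout amounts to maintaining a server-side allowlist and shrinking it over time. A sketch (names are illustrative, not from any particular library):

```python
# Phase 1 of the migration: accept both while clients re-enroll.
# Phase 2: remove "RS256" from the set and redeploy.
ACCEPTED_ALGS = {"RS256", "ES256"}

def check_algorithm(header: dict) -> str:
    """Reject any token whose alg header is not explicitly allowlisted."""
    alg = header.get("alg")
    if alg not in ACCEPTED_ALGS:
        raise ValueError("algorithm not accepted: %r" % alg)
    return alg
```

Note that this also rejects `"none"` and missing `alg` headers by construction, since only explicitly listed values pass.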


The spec requires you to implement the "none" algorithm IIRC.


From a cursory read from the specs [1] I can see the following (Chapter 7.2):

> Finally, note that it is an application decision which algorithms may

> be used in a given context. Even if a JWT can be successfully

> validated, unless the algorithms used in the JWT are acceptable to

> the application, it SHOULD reject the JWT.

From what I understand from the above, the server side can decide to _always_ reject the "none" algorithm and still qualify as a valid implementation. The fact that the "none" algorithm is implemented or not by the library becomes a detail.

[1] https://tools.ietf.org/html/rfc7519


Exactly what I was thinking while reading the article.

I'm using JWT for an API, and the server is choosing which algorithm to use.

Really don't understand why a client should bypass the server.


You could even hit 2 birds with 1 stone by going with headerless JWTs (just strip the first segment).


You can run into issues with headerless JWTs when you can't (or don't) guarantee the order of the header fields. Since the header is included in the signature of JWS objects, you must reattach a header that is exactly the same, and not just equivalent.

For example, both of these decoded headers are equivalent:

{ "alg": "HS256", "typ": "JWT" }

{ "typ": "JWT", "alg": "HS256" }

Obviously, these encode to two different values. If you reattach the wrong one, signature verification will fail.

Disclaimer: I maintain a Python JOSE library and have had to answer questions related to this on more than one occasion.
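The point above is easy to demonstrate: two JSON-equivalent headers serialize, and therefore base64url-encode, to different strings, so the signature computed over one will not verify against the other.

```python
import base64
import json

# Same keys, different insertion order (json.dumps preserves dict order)
h1 = json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode()
h2 = json.dumps({"typ": "JWT", "alg": "HS256"}, separators=(",", ":")).encode()

# Equivalent as parsed JSON...
assert json.loads(h1) == json.loads(h2)

# ...but different bytes, hence different base64url encodings and signatures.
e1 = base64.urlsafe_b64encode(h1).rstrip(b"=")
e2 = base64.urlsafe_b64encode(h2).rstrip(b"=")
assert e1 != e2
```

This is why a headerless scheme must store the original encoded header bytes, not a re-serialized copy.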


Keeping the JWT format as-is is useful if you have signed (but not encrypted) tokens though; in a web browser you can use standard libraries to inspect the token and alter the UI based on a user's permissions (the final check is always the API's responsibility of course, but if there is no need to show the 'admin' link the client can do that).


If it isn't encrypted, the only thing the client needs to know is that it's base64 encoded in order to inspect it. You'd need the secret to verify the signing and you probably shouldn't have that on the client-side!

So I still think the header is superfluous even for this use case.

edit: in fact, the client needs to know that it's base64 encoded to even read the header in the first place.


Symmetric signing is not, by far, the only use case for JWTs. Asymmetric signing, and encryption, are also well-specified and supported.


Good point! It slipped my mind.


There's some value in the header. Google use it to store the keyid, which is pretty useful.


The question is: what if you change the algorithm on the server, and a user still has an old token?


Usually you don't care (as in it will never happen), but on the off chance you do, you have to do 2 deploys: 1 to add the new thing and another to remove the old thing.

This is pretty standard for rolling signing keys and api auth methods and all kinds of stuff like that.


So, let's say you're currently using RS256 JWTs, and you want to migrate to ES256. Your JWTs are stored by clients in various places - some of them might be short-term, some long-term, so you don't want to invalidate old ones (RS256 isn't broken yet).

How do you tell RS256 and ES256 JWTs apart - so you can figure out how to validate them - unless the JWT actually encodes that information?

The trick is that JWT APIs need to force developers to choose which algorithms they want - having a `decode_jwt` function is not a good idea, `decode_es256_jwt` is much better. It'd validate that the alg in the header is correct, and return a specific error if it's not - if that error is returned, the developer can try `decode_rs256_jwt`.

This is how I've designed the API used in my OpenID Connect implementation. It works wonderfully.


Or if the old approach is no longer trusted, simply refuse old tokens. Any users with the old tokens will simply be forced to re-authenticate.


Definitely, and it’s how I’d do it as well, but the standard was written for the use case of companies on the scale of Google.

Which still support webbrowsers that were released before 2007. (But somehow, they can’t support Firefox Mobile. mmhhmk. Totally not a plot to get rid of mobile ad blocking.)


One of the advantages of using JWT is that verification happens in the CPU rather than via a disk lookup. This may sound strange, but if you don't have a quick lookup cache or centralized cache, doing the work in the CPU is a great advantage.


This is not a cryptographically sound argument


So we should stop using cell phones because one implementation happened to spontaneously combust. Got it.

Or maybe - just maybe - the claim that specifications should be judged by their implementations is entirely nonsensical. The fact that Internet Explorer exists doesn't in and of itself mean that web browsing as a whole is a defective concept. The fact that single-ply toilet paper exists doesn't mean we should stop using toilet paper. Likewise, the fact that some JWT implementations are defective does not in and of itself say a whole lot about JWT itself.



