Only if you completely bungle the implementation on the server side. The 'none' 'algorithm' isn't supported by up-to-date JWT libraries with a good track record, and you should always limit the algorithms you'll allow on the server. So if you sign your tokens with an RSA-2048 key pair, you would discard any token that isn't using that algorithm.
Of course if you are building an API that blindly accepts whatever it receives from a user agent you are bound to create a security gap — but that holds true for anything users send you, not just JWTs. JSON Web Token is not that hard to grok, and it isn't a 'foot-gun' technology (just practice trigger discipline — i.e., read the documentation).
I'm a Java guy, so I'll limit my experience to the libraries available there, but of the four Java libraries available, three provide strict validation of the signing algorithm out of the box, and explicitly document this in their examples and documentation (I think the fourth does too, but I haven't tried that one myself).
JSON Web Token is a neat standard that has a lot of good parts that can reliably be used to create and process authentication tokens. So if you are still worried about developers getting it wrong, then instead of saying 'don't use JWT', why not promote a safe subset of the specification? Call it 'iron-jwt' or something. It beats rolling your own solution.
Or if you want to be particularly constructive and feel that developers are misusing this technology, write a sensible, short, to-the-point implementer's guide for using JWT and spread the word.
Which happens with regularity. This is exactly what led to the vulnerabilities mentioned in the article.
This is a bit like arguing against the claim that generating SQL queries by concatenating strings is unsafe. "Only if you completely bungle escaping the parameters", right? As it turns out, that's an extremely common mistake. It's easier to use a known-safe practice like parameterised queries than it is to rely on developers avoiding this pitfall each and every time they have to execute an SQL query.
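The parameterised-query fix mentioned here is a one-line change in most database drivers; a minimal sketch with Python's built-in sqlite3 module (table and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Untrusted input -- a classic injection payload.
user_input = "alice' OR '1'='1"

# Unsafe: string concatenation lets the payload rewrite the query.
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver binds the value as data, never as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('admin',)] -- injection succeeded
print(safe)    # [] -- the literal string matches no row
```

The safe version is no harder to write than the unsafe one, which is exactly the point: the known-safe practice costs nothing.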
Likewise with this. We know that negotiating the algorithm is subject to mistakes, and it offers no benefits. So instead of relying on developers avoiding this pitfall each time, let's avoid the pitfall altogether and get rid of negotiation.
There's a human aspect to security that's being missed here. Saying what boils down to "well developers shouldn't write insecure code" doesn't actually stop developers from writing insecure code. You can point the finger after the fact, but if you want to actually improve security, we need better standards than this.
No, you document how it should be done. Any major database layer using SQL provides parametrised SQL queries, and strongly suggests developers use that (usually in the quick start guide). You can still do your own concatenation, but aren't advised to.
The JWT libraries that are up-to-date and well-documented don't recommend blindly trusting the algorithm set in the header either, they recommend safe approaches such as configuring your whitelist and letting any unlisted algorithms fail. For example:
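As a sketch of what that server-side whitelist looks like — hand-rolled HS256 with only the standard library for illustration; real libraries such as PyJWT expose the same policy as an `algorithms=[...]` parameter on decode:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def sign_hs256(payload: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(key, header + b"." + body, hashlib.sha256).digest()
    return b".".join([header, body, b64url(sig)]).decode()

def verify(token: str, key: bytes, allowed=("HS256",)) -> dict:
    header_b64, body_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # Server-side policy: the header's alg must be on OUR whitelist;
    # anything else -- including "none" -- is rejected before any crypto runs.
    if header.get("alg") not in allowed:
        raise ValueError("disallowed algorithm: %r" % header.get("alg"))
    signing_input = (header_b64 + "." + body_b64).encode()
    expected = hmac.new(key, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(body_b64))

key = b"server-secret"
token = sign_hs256({"sub": "alice"}, key)
print(verify(token, key))  # {'sub': 'alice'}

# A forged 'none' token is rejected by the whitelist, not by luck.
forged = b64url(json.dumps({"alg": "none"}).encode()).decode() + "." + \
         b64url(json.dumps({"sub": "mallory"}).encode()).decode() + "."
try:
    verify(forged, key)
except ValueError as e:
    print(e)  # disallowed algorithm: 'none'
```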
Only if you completely refuse to read even the basic 'getting started' documentation will you get to this kind of weird vulnerability. The issues with this part of the JWT standard have been addressed by the libraries, what remains is just a little bit of effort on the part of the developer — which may be expected from someone writing security critical code.
That is, don't blame the hammer for the shoddy carpentry.
> So instead of relying on developers avoiding this pitfall each time, let's avoid the pitfall altogether and get rid of negotiation.
So write a concise pamphlet that can easily be shared with all of the JWT libraries in an issue/bug report. If there is a good argument for having to explicitly whitelist algorithms, they might welcome the suggestion.
Some libraries already do this mind!
The solution is better tractors. Don't blame the user.
In this case:
- crypto_auth, or HMAC-SHA256 by itself, for authentication
- crypto_secretbox for symmetric encryption
- crypto_box or TLS for public key encryption
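The first item in the list needs nothing beyond the standard library; a sketch of HMAC-SHA256 authentication in Python (crypto_secretbox and crypto_box require NaCl/libsodium bindings such as PyNaCl, so they're left out here):

```python
import hmac, hashlib, secrets

key = secrets.token_bytes(32)
message = b"user=alice;expires=1700000000"

# Authenticate: tag = HMAC-SHA256(key, message), sent alongside the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # Constant-time comparison, so timing doesn't leak the tag.
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))          # True
print(verify(key, b"user=mallory", tag))  # False
```

There's nothing to negotiate: one algorithm, one key, one check.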
That sounds like it would require a greater amount of knowledge to do correctly than "careful JWT".
It also ignores the role JWT plays in promoting bad security practices
It's only when you leave things open, or use a poorly implemented library, that you have issues. It's a really easy standard to understand, and frankly most of the library implementations suck, but it's really easy to roll your own and lock it down to the specific implementation details you use internally.
I call this "blaming the user for the designer's error-prone cryptographic designs".
The problem with JOSE (the superset of specifications that includes JWT) isn't libraries written by careless developers, the problem with JOSE is the standard itself.
If in doubt, ask a cryptographer.
The problems, for people who don't want to read articles from comment links, are:
- JSON Web Signing
  - alg headers
  - "This Header Parameter MUST be present and MUST be understood and processed by implementations."
- JSON Web Encryption
  - RSA with PKCS1v1.5 padding (power word: Bleichenbacher 1998)
  - ECDH over NIST curves (and, in practice, invalid-curve attacks)
  - AES-GCM included in a list of asymmetric algorithm choices, for added confusion
  - AES-GCM for shared-key encryption, without guidance on nonces or key rotation
And what the proposed replacement looks like:
- Version (v1, v2, v3, etc., which hard-codes the algorithm choices)
- enc -> crypto_secretbox()
- auth -> crypto_auth()
- pub-enc -> crypto_box_seal()
- pub-sign -> crypto_sign_detached()
You (and the linked article) are selectively quoting the JWS specification too, to imply that the server always needs to handle a token presented to it, regardless of the specified algorithm; the article misleads the reader by doing so. The RFC also states:
> Even if a JWS can be successfully validated, unless the algorithm(s) used in the JWS are acceptable to the application, it SHOULD consider the JWS to be invalid.
As for the section about "MUST be understood and processed": the standard is simply saying that implementors must use the "alg" field. You can't ignore it. Extending that to "must process anything the client sends" results in nonsense.
The suggestion to "use NaCl" ignores the entirety of JWT claims, and foists implementation of that functionality onto every single consumer that needs them. (And you hope that they recognize that they need them.) Centralizing implementations of cryptographic code into a few, well-vetted implementations is better than just throwing our hands up and telling the community to fend for themselves. While some JWT implementations had bugs in the past, these same bugs could easily be present in the custom implementations of your proposal, of which there would be many.
Algorithm negotiation within an otherwise static protocol incurs complexity, which has a security cost. Just as importantly, it doesn't address the real origin of cryptographic protocol flaws. Note how, in order to correct cryptographic flaws in TLS, we had to push people first from SSL3 to TLS 1.0, and then later to TLS 1.1.
1. "Hard-code" the simplest possible sound crypto construction that solves the specific problem the protocol is meant to solve.
2. Put a version on the whole protocol.
3. If the crypto constructions later need to be amended, upgrade the whole protocol.
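A sketch of steps 1 and 2, with HMAC-SHA256 standing in for whichever construction v1 would actually pin (the token format and names here are illustrative, not any real spec):

```python
import hmac, hashlib, base64

VERSION = b"v1"  # v1 hard-codes HMAC-SHA256; a future v2 swaps the whole construction

def seal(key: bytes, payload: bytes) -> bytes:
    # The version prefix is covered by the MAC, so it can't be stripped or swapped.
    body = VERSION + b"." + base64.urlsafe_b64encode(payload)
    tag = hmac.new(key, body, hashlib.sha256).digest()
    return body + b"." + base64.urlsafe_b64encode(tag)

def open_token(key: bytes, token: bytes) -> bytes:
    version, payload_b64, tag_b64 = token.split(b".")
    if version != VERSION:  # no negotiation: unknown versions are rejected outright
        raise ValueError("unsupported version")
    body = version + b"." + payload_b64
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(tag_b64)):
        raise ValueError("bad tag")
    return base64.urlsafe_b64decode(payload_b64)

key = b"k" * 32
t = seal(key, b'{"sub":"alice"}')
print(open_token(key, t))  # b'{"sub":"alice"}'
```

Nothing inside the token selects a primitive; the only decision point is the single, authenticated version string.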
The anti-pattern is attempting to use a static "outer protocol" with a negotiated and regularly changing "inner protocol" --- that's an architecture we know from experience does not work well.
You know you're in trouble when developers are forced or even encouraged to make decisions between things like RSA and ECC.
> * Flaws in crypto protocols aren't exclusive to, but tend to occur mostly in, the joinery of the protocol. So crypto protocol designers are moving away from algorithm and "cipher suite" negotiation towards other mechanisms. Trevor Perrin's Noise framework is a great example: rather than negotiating, it defines a family of protocols and applications can adopt one or the other without committing themselves to supporting different ones dynamically. Not only does JWT do a form of negotiation, but it actually allows implementations to negotiate NO cryptography. That's a disqualifying own-goal.
My proposal replaces the joinery with "select a version". You don't get to mix-and-match primitives. You won't fall into the trap of Reasoning By Lego.
> Centralizing implementations of cryptographic code into a few, well-vetted implementations is better than just throwing our hands up and telling the community to fend for themselves.
I'm not throwing my hands up and saying "fend for yourselves". I've outlined what needs to be changed to make it secure, and said I'll write a formal spec when I have the time, as keeping a roof over my family's head takes precedence over doing a lot of thankless unpaid work.
I agree wholeheartedly that you don't want to give the end-user mix-and-match primitives. But I don't think that JWT really gives you that, at least at the level I think you're discussing. For example, JWT doesn't let you choose an asymmetric cipher and a hash algorithm; you have to choose a precomposed whole, such as "RS256" (RSA w/ SHA-256) or "HS256" (HMAC w/ SHA-256). To me, this seems equivalent to NaCl, in a sense. In JWT, I must choose one of "RS256", "HS256", etc. In NaCl, I must choose one of the crypto_* functions. Are these not both giving the user equivalent choices between equivalently pre-composed functionality? (Are you simply saying that the JWT standards offer too many choices between essentially equivalent cryptographic combinations, such as multiple choices for HMAC or RSA, and/or that you disagree w/ the exact combinations offered?)
> I've outlined what needs to be changed to make it secure, and said I'll write a formal spec when I have the time
Perhaps it's what you're leaving unsaid, but the gist I get from your proposal is that you're discarding the entirety of JWT's claims section, essentially equating a JWT with an authenticated and potentially encrypted message; i.e., the output of the NaCl functions that you present.
Much easier for adoption to use a "better version" of the widely-used tool you're already using than some newfangled thing that a guy on HN said was more secure.
When I'm not dealing with client work, I'll write a replacement for JOSE that has the properties I outlined above. Until I find the free time for this, things that increase my income take precedence.
The problems with JWT being addressed are in the domain of cryptography designs, so it's natural to criticize the cryptography components.
The other problem with JWT is how people use it: http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-fo...
Thanks, I didn't know what JWT meant. When I first read the title I thought it would be about the Google Web Toolkit (GWT) which I incorrectly remembered as being named Java Web Toolkit (JWT) because with GWT you use Java.
Author would do well to edit their post, changing the title to "Things to Use Instead of JSON Web Tokens" and replacing the first occurrence of "JWT" in the body text with "JSON Web Tokens (JWT)".
In addition to helping visitors from HN, this change could also improve the possibility for others to find the text in the future when they search for JSON Web Tokens using a search engine.
Makes me glad me and my colleagues are Dutch; we just pronounce the names of the three letters in Dutch — yaywaytay.
Libraries that provide one solid hard-coded implementation based on current best practices are a much better idea for most projects, but of course what then with giant specs like OAuth and OpenID Connect...
Instead OP made sure these tokens can't be used outside their fork, and thus only in Go.
For readers like me, this means "I seriously hope you guys don't do this."
I've read this one specifically many times without knowing what it means. The author may assume that an interested reader would look up its meaning, but in my case I rejected what I read and moved on.
Even if you do know what it stands for, that's still only half of the true meaning.
EDIT: By which I mean this sort of language is not a good fit for an audience that doesn't follow 4chan memes.
> Even if you do know what it stands for, that's
> still only half of the true meaning.
Did you even read that article?
> You could imagine excuses being made about the people who died, or lost their fingers or hands; they were inattentive, they weren't following the right procedure, bad luck happens and we can't do anything about it, etc.
"You only die if you completely bungle the coupling."
That analogy would only really make sense if JWT was always a risk unless the developer set it up properly. But the reality is that the majority of JWT libraries are up to date and don't contain the implementation flaws mentioned.
We could tell users to never use a specification because sometimes they might use a library that wasn't implemented right. But there are legit usecases for JWT that the alternatives don't 100% cover and the risks are entirely manageable.
So I agree that instead the advice should be "use responsibly", which means use a popular, up-to-date, and well vetted library. All web security authentication options come with advice like this - just look at OWASP.
Unless the state of JWT libraries is terrible, this isn't a big thing to ask. From my experience in two languages (Ruby/Node) there have been high-quality JWT libraries available where the maintainer has stayed on top of known implementation issues with JWT.
Eventually JWT libraries in each major language will mature (if they haven't yet already?) and the risks will be quite minimal. The industry (train operators) aren't ignoring the risks here and letting people die... it seems to me that the library authors are adapting.
the client gets to choose what algorithm they want to use
JWT.io has a comprehensive list of not shitty implementations of JWT. It's literally the first page I see when I google JWT.
NaCl/libsodium are single libraries with FFI wrappers in various languages.
Comparing the two is like trying to decide on the quality of the DirectX vs. OpenGL APIs, by the quality of their implementations. If you've only got one library (DirectX, libsodium), and you're throwing a lot of users at it, it's probably going to be pretty solid whether or not its API is designed well. If you've got a lot of implementations (OpenGL, JOSE/JWT), you'll probably get some crap implementations, just by the fact that there will be "core implementations" that get used+tested in production, and then "edge implementations" that were just written to scratch one guy's itch and have no battle-hardening.
You can argue, separately, that anything with crypto elements like JOSE/JWT has should be absorbed into some high-level crypto abstraction framework like NaCl/libsodium, so that it can also benefit from there just being one high-level implementation of it.
But JOSE/JWT is a lot more than crypto; for one thing, it's also JSON. Do you want there to be a JSON parser in libsodium, the way there's an ASN.1 parser in OpenSSL? This is the path that led OpenSSL to its current "everyone uses it for everything but it's so bloated that you can flip over any random rock and find a bug" state.
Author clearly doesn't appreciate that the ultimate truth of any authentication scheme is that you should not trust anything from the user. So what if you take away a client's ability to specify what algorithm a piece of data is signed or encrypted with - if you blindly just accept whatever a client did and proceed then you're always gonna find yourself vulnerable.
Intel didn't use JWT tokens or pass a client header - they just trusted what the client sent and landed themselves in the same mess. I make the point to illustrate that what real security pros do is make sure basic checks like validating user input are done regardless of what specification or algorithm is involved.
For example, we should be doing everything we can to ease the mental burden of writing secure code. Whenever we can reasonably eliminate the possibility of a vulnerability, we should. Even if someone has to be an idiot to make the mistake, just don't make it possible to make the mistake in the first place. We should also try to reduce the amount of potentially-malicious input, reduce the amount of options and special cases, etc. Simplify.
Usability improvements pay off multiple times. They make developers' jobs easier because there's less code to write and the code that does get written is easier to reason about. They make security auditors' jobs easier because there are fewer "dumb" mistakes to check for, and that means more of the audit time can be spent looking for deeper flaws.
(Nitpick about the phrase "validating user input": The user's input should never be trusted, but that doesn't mean we should write code to try and decide whether the user's input is "safe" or "unsafe", as that can be impossibly-hard depending on what's happening after the validation check. The code should just be secure no matter what the input is.)
There's a bit in Google's paper about the Chubby lock service where they note that a big reason for the success of that project was all the concessions they made for developer usability.
> Still, the railroads stuck with them because link-and-pin couplers were cheap. You could imagine excuses being made about the people who died...
You have to think that having to change every coupling on every train at the same time is a pretty big and very real excuse. No single operator could change first, unless they were a near monopoly. I'd guess that railroad companies weren't big fans of their trained employees getting crushed.
So tired of the trope that big evil corporations put zero value on the lives of their employees. Fact is, they value them as much as the replacement cost + cleanup cost + downtime + bad PR, which in many cases is pretty high. Also, sometimes the owners actually aren't evil monsters.
1. The client should not get to 'choose' which algo it uses, it merely states what algo it did use. Emphasis should be on the server to verify that the client didn't do (or claim to do) something bad.
2. What happens if in 5 years time the algo you've standardised on is suddenly compromised? You've ripped out the mechanism that updated clients could theoretically use to signify that they're using a newer algorithm. Take for example Git and its recent SHA-1 collision issue.
3. By emphasising that 'clients should not get to choose', you're making the case that the choice is itself a weakness, when it is not. The weakness is often in how inputs are processed (and far more often than any weakness in algorithms).
If you are using JWT and someone breaks SHA2, you still have to worry about downgrade attacks. To evade downgrade attacks, you'll have to detect the protocol the client sends, and reject tokens that specify SHA2 or below. Or, roughly the same position you'd be in with one good algorithm: still in need of a backwards-incompatible upgrade.
The idea was certainly not that services blindly accept any token from every algorithm shipped in the library.
Your browser supports TLS 1.0, should you throw it out?
> If someone breaks SHA2...
JWT has a built-in mechanism for handling this: expirations.
The expiration is just an additional value in the payload that the implementation is supposed to check against.
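A sketch of that check, done after the signature has already been verified (the timestamps are arbitrary; "exp" is the registered expiration claim from RFC 7519, expressed in seconds since the epoch):

```python
import time

def check_claims(payload: dict, now=None) -> None:
    # 'exp' is a NumericDate: seconds since the Unix epoch (RFC 7519, section 4.1.4).
    now = time.time() if now is None else now
    if "exp" not in payload:
        raise ValueError("missing exp claim")
    if now >= payload["exp"]:
        raise ValueError("token expired")

issued = {"sub": "alice", "exp": 1_700_000_000}
check_claims(issued, now=1_699_999_999)   # still valid, returns None
try:
    check_claims(issued, now=1_700_000_001)
except ValueError as e:
    print(e)  # token expired
```

Note that expiry only bounds the damage window; it doesn't help if the signature algorithm itself is broken, since an attacker can mint tokens with any exp they like.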
Unlike JWT, hundreds of the world's best security engineers at various browser companies are working on mitigating the situation as well as possible.
The decision to include RSA was misguided, and there's no point in letting users choose the size of the hash or whether to use key-wrapping or not (in JWE - the answer is: when in doubt, always use key-wrapping with GCM; 96-bit nonces are not long enough if you send many messages). There are other confusing knobs which shouldn't have been there.
It's best to go the NaCl way and choose 1 algorithm considered best-in-class for each type of crypto, and leave space for adding new algorithms in the future when the old algorithm becomes compromised. Cipher breakdown events like Shapocalypse don't happen out of the blue. We usually get gradually improving attacks at least 5-10 years before that breakdown happens (SHA1 had weaknesses found as early as 2005).
JWT is extremely useful when you want a one-time use token to pass a claim to another system. For example, employee portal generates a link for a user to check their available leaves in the HR system. Since the user is logged in to the employee portal, they shouldn't have to login to the HR system. JWT is great for this use case. But for general sessions management, there are better solutions.
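A sketch of that portal-to-HR hand-off, using the registered claim names from RFC 7519 ("jti" is the token ID meant for exactly this replay check); the "hr-system" audience and the TTL are made-up values, and the signature/exp verification is assumed to happen upstream:

```python
import secrets, time

used_jti = set()  # the HR system remembers redeemed token IDs until they expire

def mint_claims(subject: str, ttl: int = 60) -> dict:
    # Short-lived, single-use claims; the portal would sign this dict as a JWT.
    return {"sub": subject, "exp": int(time.time()) + ttl,
            "jti": secrets.token_urlsafe(16), "aud": "hr-system"}

def redeem(claims: dict) -> bool:
    # Assumes the signature and exp were already verified upstream.
    if claims["aud"] != "hr-system" or claims["jti"] in used_jti:
        return False
    used_jti.add(claims["jti"])
    return True

c = mint_claims("alice")
print(redeem(c))  # True  -- first use
print(redeem(c))  # False -- replay rejected
```

The jti set is the one piece of server-side state this pattern needs, but it's tiny and self-expiring, which is why it beats a full session store for this use case.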
EDIT: I just saw your spreadsheet. Perhaps you have gotten a bad impression of OAUTH2 by looking at some client libraries. Many of these are made complicated by trying to handle all aspects of OAUTH2 instead of just the single-sign-on flow, which is simple enough that you shouldn't even need a library.
If JWT is too complicated and confusing, OpenID Connect inherits all that complexity and then adds some more.
The authentication event is a regular JSON object.
There is no need to validate it since it was received from a server-to-server TLS-protected HTTPS request.
This is not anywhere near as complicated as using JWT for session storage directly.
But if you go as far as not verifying the ID Token, what do you need OpenID Connect for? Just use plain old OAuth 2.0.
* The Discovery URL
* Standardisation in the subject name
* The userinfo endpoint
re. OAuth2 - I wrote the spreadsheet from the point of view of a developer of REST APIs. As a developer exposing APIs, OAuth2 is only useful if I want my users to decide what data they want to share with third party developers.
Have you seen how Steam performs email verification as part of a 2FA successful login flow?
For password-recovery flows, you may still want to log (audit) attempted password recovery, which means a database hit anyway. From that perspective, "magic link" is good enough.
> OAuth2 is only useful if I want my users to decide what data they want to share with third party developers.
When you have logged in with an OAUTH2 provider, you're given a token which can be used against the API in any of the mechanisms you describe if your users are likely to do more than one request. Even if you only ever authenticate over your own OAUTH2 provider, this might still have advantages involving audit, reliability/availability, and so on.
One thing that I find very useful is using OAUTH2 to hand over authentication to my customers; they want to have their own password policy, and their own two-factor system, and so on. I don't want to implement that for every customer. Even when I implement an OAUTH2 provider for each customer I can do this easily, but nowadays, they might be able to use Amazon Cognito, or Azure to log into my API.
The first time someone wants to consume my REST API from within their dashboard, I'm already ready for this: I can build an SSO between their dashboard and my OAUTH2 provider (or just use theirs!) keeping everything separate from my actual business API, and the customer feels like this was a massive customization.
And so on.
It states about Stateful Session cookie that it's "supported by all web frameworks and browsers.", but Flask is an example of popular web framework which doesn't support them. The "session" in Flask actually uses Stateless Session cookies.
Also, what about Hawk? https://github.com/hueniverse/hawk
If JWTs are bad, XML digsigs are "literally Cthulhu".
The other platforms I've used or integrated with - Tivoli, Layer 7, Ping Federate, a huge hack job written in PHP - all took weeks/months to get working.
That said I haven't tried Spring SAML recently, so maybe that is painless now. But probably not
For our usage, even that was overkill and we are using Ipsilon (https://ipsilon-project.org/), with IPA backend. It is more quirky, docs are scarce, but it works for us.
On app side, it is mod_auth_mellon.
But as alternatives go, has anyone tried using Macaroons?
They've been mentioned on HN a few times but I've never seen the tech catch on, even though it looks like a very cool way to implement authorization.
Is there any particular reason why?
Here are some resources I've found and I think Macaroons is a very interesting concept. Even the paper is accessible to someone like me, without any real security or cryptography expertise.
Research paper: https://research.google.com/pubs/pub41892.html
A new way to do authorization: http://hackingdistributed.com/2014/05/21/my-first-macaroon/
(This link has an invalid HTTPS certificate if that concerns you): https://evancordell.com/2015/09/27/macaroons-101-contextual-...
After I found this, I've always wanted to try it out but haven't had the chance. Does anyone have any experience or comments about it?
I've used macaroons in several settings and highly recommend them. The only thing really missing for "wider" adoption is something like a standard caveat language, the lack of which keeps macaroon use pretty localized to your specific deployment.
I'm still hoping the tech catches on somehow and then there's probably going to be more people interested in a standardized way to define a caveat.
When I was toying with Macaroons some time ago, I thought that I could use JSON to format the caveat and toyed with the idea of defining a set of useful caveats, but never got too far.
If it wouldn't doom them to obscurity, I'd personally like to see a set of vocabularies that macaroon users can use (a la semantic web). This is probably as simple as namespacing caveats with globally unique identifiers to indicate the vocabulary.
Something like that sounds reasonable. I would just maybe add some sort of set-operations logic that you could use to build more complex stuff.
Thus I would be able to delegate to someone a Macaroon that has admin access to all of Project P EXCEPT (or MINUS) access to Project P's children in list L. Or maybe delegate with access to any object in List (L1 AND L2). Sort of like a SQL query (to an extent).
At least that was the use case I needed back then, i.e. How to secure an API in a hierarchical way, such that certain user roles can access only certain children of a certain parent node.
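That set-operations idea could be sketched like this; the "only"/"except" operation names are invented for illustration, and each operation is deliberately only allowed to shrink the granted set:

```python
def evaluate(granted: set, caveats: list) -> set:
    """Each caveat may only shrink the granted set, never grow it."""
    allowed = set(granted)
    for op, resources in caveats:
        if op == "only":        # intersect: restrict to a sub-list (L1 AND L2)
            allowed &= resources
        elif op == "except":    # subtract: MINUS a sub-list
            allowed -= resources
        else:
            raise ValueError("unknown caveat op")
    return allowed

# Admin access to all of Project P, EXCEPT the children in list L.
project_children = {"p1", "p2", "p3", "p4"}
caveats = [("except", {"p3", "p4"})]
print(sorted(evaluate(project_children, caveats)))  # ['p1', 'p2']
```

Because intersection and subtraction are both monotonically restrictive, any party attenuating the macaroon can add caveats without being able to escalate its own access.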
It's the nature of a grab-bag metastandard that it can be twisted into a poor version of any related standard. That's the problem with standards like these.
Do you think that Macaroons are a sane alternative to JWTs? or in general, any comments on using them for authorization instead of OAuth2 or any of the alternatives?
Edit: I think I do understand that some of the problems with JWT is that they have a lot of moving parts so the attack surface is much greater as well as not helping the developer avoid getting shot in the foot.
Macaroons by comparison seem much simpler to my untrained eye, in that they are "just" a chain of HMACs, so that any modification of the different sections of a Macaroon would render it invalid.
So why would Macaroons be or not be a suitable alternative to JWTs or other auth methods?
Macaroons and JWTs are two fairly different things. JWTs are a combination of two things: 1. A standard encoding for authenticated JSON objects, and 2. a fairly ad-hoc standard set of field names.
I say ad-hoc because it's pretty much up to you what you actually put e.g. in iss, sub, jti, etc., and most of them are optional. So applications of JWTs still have to make those decisions (e.g.: If you want to convey that a JWT is issued by a particular key, do you (a) put the hash of the key as the "kid" in the header, (b) put the hash of the key as the "iss" claim, or (c) neither or both. The answer seems to be whatever you want.)
So at the same time, JWTs tie your hands by deciding encodings and identifiers for you (e.g. bytestring valued claims must be base64 encoded into strings, which then gets base64 encoded again for signing (!), keys should be JWKs, ...), but also don't actually make the important decisions for you.
Macaroons (as described in the paper) are more abstract. There is no standard encoding, or registry of hash algorithms. And indeed there is no standard language for caveats (aka claims), since that is entirely application specific. So.. macaroons make no decisions for you at all.
The main point of macaroons is delegation: If you want to pass some authority to someone and let them pass on a subset of that authority to someone else, Macaroons do that well. JWTs don't.
As a consumer/verifier of macaroons, they allow you (through third-party caveats) to defer some authorization decisions to someone else. JWTs don't.
If you just want to protect the integrity of a cookie, or an OAuth token, and nobody but you, the issuer, should touch it, then you just have to sign it - so macaroons and JWTs will both do fine. JWTs have the advantage of fixing some of the details for you.
If I understand correctly Macaroons would be very suitable for example, to build a framework for a Single-Sign On service, such that an Auth Server mints the Macaroons depending on certain access policies and then whatever services need to be secured behind the SSO can implement a verifier that consults the Auth Server for the access policy and then grant or restrict access to a request with an attached Macaroon.
Then if I decide to delegate access to someone else (or e.g. to myself in another device), I can attenuate it according to some specified parameters (by time, by allowed operations or access, by device, etc) and then send it over. Then whoever I delegated access to could in turn do certain restricted requests to the services behind the SSO without even going to the auth server to get an access token again (as long as the attenuated Macaroon is valid).
Is this correct?
Since I discovered Macaroons I've been wanting to figure out how well they would fit to build an auth server to restrict access to an API by user roles, requested instance or even API routes for example. This way implementing things like "Share" links should be easier/safer as well as hierarchical access to an API (i.e. if I can access route X I can also access routes below X, or not, etc).
Off topic: And THIS is why I absolutely love Hacker News. After asking something I got a response from both an author of the paper and the author of the blog post where I found said paper. This community is just amazing! :)
Yes. While SSO is best implemented with a 3rd party caveat, you can also do it with simple delegation: The final verifier, i.e. the server that controls access to the resource, mints the initial macaroon and gives to the auth server. The auth server hands out time-limited attentuations of that to authenticated clients (who can further attenuate and hand to others).
The downside is that the auth server has full unlimited access to the resource, so compromising that "master" macaroon would be catastrophic. With 3rd party caveats, the resource owner can make a pact with the auth service (or multiple ones) such that a sign-off from an auth service is needed, but not sufficient, to access the resource.
> Since I discovered Macaroons I've been wanting to figure out how well would they fit to build an auth server to restrict access to an API by user roles, requested instance or even API routes for example
Ultimately this boils down to defining a caveat language to describe these policies and building an evaluator for it. The paper has some guidelines for writing evaluators, but the gist of it is: Verify the integrity first. Make sure evaluation of each caveat can only ever restrict what's already allowed, and never escalate.
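A bare-bones sketch of that chain-of-HMACs construction and a verifier following those guidelines (no serialization or third-party caveats, unlike a real macaroon library such as pymacaroons):

```python
import hmac, hashlib

def mac(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def mint(root_key: bytes, identifier: bytes):
    # The signature starts as HMAC(root_key, identifier); caveats chain onto it.
    return {"id": identifier, "caveats": [], "sig": mac(root_key, identifier)}

def attenuate(m, caveat: bytes):
    # Anyone holding the macaroon can ADD caveats, but never remove them:
    # the new signature is keyed by the previous one.
    return {"id": m["id"], "caveats": m["caveats"] + [caveat],
            "sig": mac(m["sig"], caveat)}

def verify(root_key: bytes, m, check) -> bool:
    sig = mac(root_key, m["id"])
    for caveat in m["caveats"]:
        if not check(caveat):       # every caveat must hold in this context
            return False
        sig = mac(sig, caveat)
    # Integrity last step: recomputed chain must match the presented signature.
    return hmac.compare_digest(sig, m["sig"])

root = b"resource-owner-secret"
m = attenuate(mint(root, b"api-token-42"), b"path = /projects/p1")
print(verify(root, m, lambda c: c == b"path = /projects/p1"))  # True
# Stripping the caveat invalidates the signature:
forged = {"id": m["id"], "caveats": [], "sig": m["sig"]}
print(verify(root, forged, lambda c: True))  # False
```

This is why attenuation only ever restricts: a holder can extend the HMAC chain, but reversing it would require inverting the HMAC.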
You can do everything on the client side and you don't have to mess with OAuth.
Edit: Sorry if I came across as dismissive of your article somehow.
What I meant to say is that some comment from a security expert, specifically on the Macaroons concept/implementation would be great, because to my untrained eye it looks very nice and secure but then again, I'm not an expert and thus can't trust myself on that.
For example this piece of code (fragment taken from ) restricts the Macaroon's usage to a given account... Or does it?
M.add_first_party_caveat('account = 3735928559')
As far as I can see, Macaroons have the interesting ability to be adjusted by intermediaries to limit their scope. Say you have a Macaroon that gives access to your Gmail account: you can "attenuate" it to limit its scope to only emails in the next 10 minutes, without contacting a third party. That'd be very useful for OAuth-like flows...
Sure, some implementations of JWT have had bugs in the past, but this hasn't been an issue for quite a while and it's definitely not an issue with the RFC itself. It's the same as if you blamed the TLS/SSL RFC for being responsible for the heartbleed bug in OpenSSL - It makes no sense.
>> You might have heard that you shouldn't be using JWT. That advice is correct - you really shouldn't use it.
This type of blanket thinking is dangerous. There are cases where JWTs are practically necessary and unavoidable. Whenever an extremist blanket idea like this catches on in this industry, it becomes a major pain to have to explain to people over and over why in THIS SPECIFIC CASE it is actually the best solution possible.
It makes some sense.
Features are attack surface. Each extra feature or option your protocol enables is more code you need to manage. So careful decisions need to be made: just because you can easily specify a feature for many use-cases in your protocol doesn't mean you should, because once you spec it people might just use it.
For heartbleed, for example, why was the TLS Heartbeat extension ever specified for TLS over reliable protocols? It serves no purpose: TCP has TCP_KEEPALIVE if that's a thing you need. But it was specified, and because it was specified it was implemented, and then it became attack surface that needed to be protected. It wasn't. So I guarantee to you that if RFC 6520 had been more restricted in scope, the Heartbleed attack would not have happened (or would have been a much more minor story, I can't remember if Heartbleed affected only the TLS and not the DTLS implementation).
Did we get SSL/TLS right? No, we've failed a few times, but we're doing better. Why is that? People are paying attention more than ever to OpenSSL, and now we have competition with LibreSSL and the like.
Side note: The irony of his coupler example; it's a flawed spec, not flawed implementations... the implementations followed the spec but the spec was dangerous from the start. JWT is the opposite: good spec, but a long time ago some early adopters didn't get things right.
TLS is a legacy protocol. Nobody likes it very much, but it's such an established standard that you don't have much choice than trying to fix it. The same might happen to JWT, but in both cases we'll be fixing the specs, not just the implementations.
I wanted to go with a simple, minimal NaCl-based system but in the end did implement a lot of OpenID Connect in the hope it would make interoperability easier with existing client libraries. I don't want to write a client for each and every programming language other people in any related projects would want to use. That in my opinion is the value of something like JWT: you can tell people up front that that's the way they're going to get the user's ID data, no matter how much server or client implementations will be switched around.
I feel that when faced with a spec like OpenID Connect and OAuth 2.0, it's not necessary to implement more than what is strictly needed. If you don't need all the flows and algorithms in your project, don't implement them and don't accept them. The parts you do implement should comply – why base it on a spec if you throw interoperability out of the window – but don't waste months trying to correctly implement all of a huge corporate spec if that doesn't make sense for the size of your project or organisation. Complete implementations might have value only if that's your main product and you need that line to sell it.
I use JWT only as ID, not as session, allow only one server-chosen algorithm for signing, and rely on TLS for encryption.
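That "only one server-chosen algorithm" policy can be sketched with the stdlib alone (a toy HS256 signer/verifier, not a full JWT library; the helper names are my own):

```python
import base64, hashlib, hmac, json

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(payload: dict, key: bytes) -> str:
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url_encode(json.dumps(payload).encode())
    signing_input = ("%s.%s" % (header, body)).encode()
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return "%s.%s.%s" % (header, body, b64url_encode(sig))

def verify_hs256(token: str, key: bytes) -> dict:
    header_b64, body_b64, sig_b64 = token.split(".")
    # Pin the algorithm server-side; never let the token's header choose.
    if json.loads(b64url_decode(header_b64)).get("alg") != "HS256":
        raise ValueError("unexpected algorithm")
    signing_input = ("%s.%s" % (header_b64, body_b64)).encode()
    expected = hmac.new(key, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(body_b64))
```

With the algorithm pinned like this, a token whose header claims `"alg": "none"` is rejected before its signature is even looked at.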
There's clearly a need for up-to-date "web-approved" standards to pass crypto-friendly data structures around – or maybe I'm just not familiar with any recent efforts. Normalising and serialising JSON is pretty error prone...
Discussion on cryptography and particular implementations aside, I think this is sound and I normally follow this when judging technologies.
So, as someone who does some work in crypto engineering, arguments about JWT being problematic only if implementations are "bungled" or developers are "incompetent" are sort of an obvious "tell" that the people behind those arguments aren't really crypto people. In crypto, this debate is over.
I know a lot of crypto people who do not like JWT. I don't know one who does. Here are some general JWT concerns:
* It's kitchen-sink complicated and designed without a single clear use case. The track record of cryptosystems with this property is very poor. Resilient cryptosystems tend to be simple and optimized for a specific use case.
* It's designed by a committee and, as far as anyone I know can tell, that committee doesn't include any serious cryptographers. I joked about this on Twitter after the last JWT disaster, saying that JWT's support for static-ephemeral P-curve ECDH was the cryptographic engineering equivalent of a "kick me" sign on the standard. You could look at JWT, see that it supported both RSA and P-curve ECDH, and immediately conclude that crypto experts hadn't had a guiding hand in the standard.
* Flaws in crypto protocols aren't exclusive to, but tend to occur mostly in, the joinery of the protocol. So crypto protocol designers are moving away from algorithm and "cipher suite" negotiation towards other mechanisms. Trevor Perrin's Noise framework is a great example: rather than negotiating, it defines a family of protocols and applications can adopt one or the other without committing themselves to supporting different ones dynamically. Not only does JWT do a form of negotiation, but it actually allows implementations to negotiate NO cryptography. That's a disqualifying own-goal.
* JWT's defaults are incoherent. For instance: non-replayability, one of the most basic questions to answer about a cryptographic token, is optional. Someone downthread made a weird comparison between JWT and NaCl (weird because NaCl is a library of primitives, not a protocol) based on forward-security. But for a token, replayability is a much more urgent concern.
* The protocol mixes metadata and application data in two different bag-of-attributes structures and generally does its best to maximize all the concerns you'd have doing cryptography with a format as malleable as JSON. Seemingly the only reason it does that is because it's "layered" on JOSE, leaving the impression that making a pretty lego diagram is more important to its designers than coming up with a simple, secure standard.
* It's 2017 and the standard still includes X.509, via JWK, which also includes indirected key lookups.
* The standard supports, and some implementations even default to, compressed plaintext. It feels like 2012 never happened for this project.
For almost every use I've seen in the real world, JWT is drastic overkill; often it's just a gussied-up means of expressing a trivial bearer token, the kind that could be expressed securely with virtually no risk of implementation flaws simply by hexifying 20 bytes of urandom. For the rare instances that actually benefit from public key cryptography, JWT makes a hard task even harder. I don't believe anyone is ever better off using JWT. Avoid it.
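For reference, that whole "trivial bearer token" scheme fits in a few lines (a sketch; `sessions` stands in for whatever server-side store you'd actually use, such as Redis or a database table):

```python
import secrets

sessions = {}  # server-side store; could be Redis, a DB table, etc.

def mint_token(user_id: int) -> str:
    token = secrets.token_hex(20)  # 20 bytes of urandom, hexified
    sessions[token] = {"user_id": user_id}
    return token

def authenticate(token: str):
    # No parsing, no headers, no algorithm negotiation: the token is
    # opaque to the client, and revocation is just deleting the entry.
    return sessions.get(token)
```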
This should probably be the first thing anyone thinking of using JWT reads.
This is certainly true for schemes that require trips to a source-of-truth database to authorize a token (c->auth, c->resource, resource->auth). It's also true for schemes where the token is associated with capabilities that are loaded from a database. Using JWT to implement RBAC is flawed.
However, there is a strong use case for the token carrying its own capabilities -- that is, a token that is more than just "20 bytes of urandom".
If a resource service can derive the capabilities associated with a token generated by a trusted authentication service without contacting that service, that has real world implications for lower latency, higher throughput applications that are simpler to compose and operate.
As far the cryptographic credibility, the idea behind capability-based security is an old one, and I'm sure you're aware of the research. This particular spec may be problematic, folks may be misunderstanding and misusing the primitives, but the underlying idea is sound.
Previous HN discussion of CapSec Wikipedia article
The issue isn't with capabilities, or delegated authentication, or public key tokens, or even standardizing any of those things. I think at this point I've been pretty clear about what I believe the issues actually are.
Do you have a standard that you would recommend for any of those things?
Echoing tptacek's comment above: the problems with using those individual pieces is in the joinery - combining them in ways that are broken.
Can you point us to an article showing how to implement this for a web app communicating with an API? Lacking crypto expertise and documentation, the average programmer is going to use something like Auth0, and if his users' recipes or jogging history is compromised, so be it.
I recently set up libsodium for a client running Node.js on the server, and could work on this for you as well, if you want to send me an email, I can send you my rates.
My value proposition is I've shipped a lot of useful things for companies, and found security vulnerabilities, and those skills are in demand these days, I guess.
You can read more here: https://burke.services
But you need alternatives.
NaCl is not an alternative, because you need to build a protocol on top of it. Noise is not an alternative because it's not meant for the same purposes. Fernet is the only thing close to an alternative, but it lacks useful features (for instance, how do you specify a key ID for key rotation?), supports only symmetric encryption, has a weird cipher choice, and barely gets any library support.
Regardless, bad engineering is bad engineering. Bad security engineering gets people hurt. It's not that JWT doesn't do the best job it could do: it's that it's a snakepit of implementation traps that create vulnerabilities. For me, the argument ends there.
A good read at:
The problems with JWT can be divided into two classes:
1. Too many options, making it easy to misuse. Even if you disallow the 'none' algorithm (like most newer JWT libraries do), there are still many other ways to break it.
2. Misguided cipher choices: AES-GCM (an easy target for nonce reuse), RSA, NIST P-curves.
So in short, even if you're using encryption, JWT just makes it easy for the crypto itself to fail.
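The nonce-reuse point is worth illustrating. AES-GCM encrypts in counter mode, so the keystream depends only on (key, nonce); the same failure shows up with any stream construction. A toy demonstration (this is NOT real crypto, it just shares CTR mode's XOR structure; with real GCM, nonce reuse additionally leaks the authentication key):

```python
import hashlib

def toy_stream_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # Toy keystream: SHA-256(key || nonce), truncated. Like CTR/GCM,
    # ciphertext = plaintext XOR keystream, and the keystream depends
    # only on (key, nonce) -- never on the plaintext.
    keystream = hashlib.sha256(key + nonce).digest()[:len(plaintext)]
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

key, nonce = b"secret key", b"nonce-1"
c1 = toy_stream_encrypt(key, nonce, b"attack at dawn")
c2 = toy_stream_encrypt(key, nonce, b"defend at dusk")  # nonce reused!

# An attacker with no key XORs the two ciphertexts: the keystream
# cancels out, leaking the XOR of the two plaintexts.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(b"attack at dawn", b"defend at dusk"))
```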
What JWT does is actually not that special, as it is just a standardized container (akin to MKV and its supported codecs) inside which existing technologies can be used. It's easy to write something similar if you know what you are doing. I did that before, but still missed some extra verifications already built into JWT.
Of course, the chosen technologies allowed to be used inside a JWT can still be prone to vulnerabilities. I am not sure if that can be blamed on JWT. People should still think about which options to use.
More broadly - JWT isn't just about the exchange between a single client and a server; the choice of "use cases" misses a very real constraint of multi-party protocols. Within the context of OpenID (or OAuth more broadly), it's about the relationship between an end-user, a resource owner, a client and an authorization server -- all of whom need to be able to interact with a token, and often offline.
To non-cryptographers, this idea that protocols need to be constantly ready to accept new ciphers in case there's a break in one of the old ones is a very big deal, so much so that every amateur crypto protocol design includes cipher suite negotiation as the centerpiece of the protocol. In reality, it's a minor concern. With the exception of RC4, pretty much every TLS ciphersuite flaw has been the product of flawed joinery and not problems with underlying ciphers --- which is why the major breaches have forced us to move people not from one ciphersuite to another, but instead from SSL3 to TLS 1.0 to TLS 1.1.
For an expert design example, look at Trevor Perrin's Noise framework. An even simpler idea: simply specify a single set of coherent cryptographic constructions, and then version the whole protocol --- if there's a serious break in your protocol, you'll almost certainly have to make changes across the protocol anyways --- and upgrade the whole thing.
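The "version the whole protocol" idea can be sketched in a few lines (a hypothetical token format `v1.<payload>.<mac>`; the names and key are made up for illustration):

```python
import hmac, hashlib

KEY = b"demo-key"

def verify_v1(body: str) -> str:
    # Version 1 pins exactly one construction: HMAC-SHA256 over the payload.
    payload, _, mac = body.rpartition(".")
    expected = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mac):
        raise ValueError("bad MAC")
    return payload

# One complete, coherent construction per protocol version -- no
# per-message negotiation. A serious break rolls the whole version:
# add "v2", migrate, then stop accepting "v1".
VERIFIERS = {"v1": verify_v1}

def verify(token: str) -> str:
    version, _, body = token.partition(".")
    if version not in VERIFIERS:
        raise ValueError("unsupported version %r" % version)
    return VERIFIERS[version](body)
```

The attacker can still pick the version string, but can only pick among whole protocols the server deliberately still supports, never mix-and-match pieces of them.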
I haven't thought this through fully, but as far as I can tell ecosystems on the web evolve. And so I think it's probably a good idea that we architect things for the web in such a way that we don't inhibit that evolution. When you put a decision like encryption algorithm in the client's hands does it feel to anyone else that the security will evolve more rapidly, and thus remain more robust? When the client is deciding, there's a larger pool of people "voting" for what is an acceptable level of security. Even though a lot of those "votes" will be based on the default settings of a library, that library will over time become less popular as more and more people consider it unsafe.
By the same token, if a particular service (server-side) does not keep up with that evolution, fewer and fewer people will use it as other (safer) services pop up.
It is suggested that to authenticate a user one should use TLS. That might be true for a login form, but not beyond that. Once you have logged the user in, you need to continue authenticating every request. JWTs are one way to put this information on the client side without having to put much trust in it.
The second example is a simple asymmetric encryption example which... for some reason JWT is not a solution for? I've used Ed25519 plenty of times with JWT (custom algo header in this case), so I see no problem there plus... I don't think this is what JWT is actually trying to solve.
The third example is encrypted data to the client, which is also something JWT isn't trying to solve, this is what JWE is for. JWT is purposefully unencrypted and I'm not sure how many developers would actually pretend a signature is encryption.
The last part is an actual example of JWT use cases, in which case however the author blabs on about the (in)famous "alg=none" bug a lot of libraries had. I've specifically used the Go library mentioned, and strictly enforcing the algorithm is a no-brainer if you are using a custom one anyway. On the other hand, I still use HMAC for a separate token for short-term authentication over endpoints (to make blacklisting logins easier).
So JWT simply gives me some flexibility in sharing common code for authentication, the same code can consume the long-term tokens and the short-term tokens and much more if needed in the future.
I'm not saying JWT is the end all for problems, but it's rather easy ready-to-use solution for some of my problems.
Why write a signature library when there is one ready to use that, with care, is safe to use?
No idea about "JWT" but maybe this is part of the psychology that keeps schemes like SSL/TLS in use. (Part.)
Do you think there are people who are actually proud of achieving complexity in a specification or implementation?
If you have a centralized system dealing w/ authentication, this doesn't work, as now everything that needs to verify JWTs needs the secret. The support for RSA, instead of HMAC, is there to meet a different set of requirement.
What people fail to remember is that JWTs — and the libraries that work w/ them — do not just wrap data to be authenticated. They also handle verifying the various claims on a token — is this token applicable here? is this token expired? — things relevant to an authentication token, not just a mere signed blob of data. The suggestion to leave those to be reimplemented by every single end-user is bad advice.
If the article is blatant self-promotion then I could see the problem with that and those articles don't get anywhere on HN/Reddit/etc so they won't rank well on Google either. But I've seen some high quality docs/libraries coming out of companies like that and it's a great thing.
They aren't responsible for developers doing poor implementations of JWT. And other developers, such as the OP, equally have a voice if they have a problem with the choice of technology being promoted.
JWT/JWS libraries that handle all the validation alone are treacherous. You should ALWAYS parse the JSON beforehand, perform your claims and header validation, and only then pass the payload/header/signature to the signature check. If anything, it's computationally cheaper than blindly running the token through a signature-validation function and only then checking the header.
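A sketch of those cheap pre-checks (stdlib only; the pinned `HS256` and the claim names are illustrative, and the signature still has to be verified afterwards):

```python
import base64, json, time

def b64url_json(part: str) -> dict:
    return json.loads(base64.urlsafe_b64decode(part + "=" * (-len(part) % 4)))

def precheck_claims(token: str, audience: str, now=None) -> bool:
    """Cheap structural/claims checks done before the (more expensive)
    signature verification -- the signature must still be verified after."""
    now = time.time() if now is None else now
    header_b64, payload_b64, _ = token.split(".")
    header = b64url_json(header_b64)
    claims = b64url_json(payload_b64)
    return (
        header.get("alg") == "HS256"        # only the pinned algorithm
        and claims.get("aud") == audience   # meant for this service
        and claims.get("exp", 0) > now      # not expired
    )
```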
I don't have a problem with JWT, only in that having the signature method, etc configurable in the first place as an implementation detail is probably a bad idea.
As far as I know there are no algorithms that exist today that we can guarantee will never be broken in the future. So algorithm choice inherently must be decoupled from the specification.
EDIT: Or a naive server implementation for that matter...
JWE also supports more than just RSA - it definitely supports Elliptic Curve Cryptography (although I would prefer if they had chosen Curve25519 instead of the NIST curves), and it is used with EC Diffie-Hellman in a certain construction that actually gives you more forward secrecy than NaCl box.
NaCl box offers you no forward secrecy, whereas all the ECDH-ES algorithms in JWE offer you partial forward secrecy, which saves you when the sender key is leaked, but not when the receiver key is leaked. That's the best you can get: we can't have two-way perfect forward secrecy in a non-interactive protocol like JWT, since we can't perform a direct negotiation of ephemeral keys between sender and receiver.
Of course, we're actually comparing apples to oranges here: you can get the same partial forward secrecy guarantee with libsodium's sealed box (not in the original NaCl):
As for an alternative to JWT/JWE, I think it really depends on what people want, but I have slightly different suggestions:
1. For simple access tokens in low-medium load scenarios, access tokens stored in Redis are probably simpler to implement than JWT + revocation.
2. If you don't have any secret information inside the token, HMAC-SHA256.
3. If you have secret information, I'll actually go with libsodium's XChacha20-Poly1305 AEAD:
secretbox doesn't support AEAD, but you often have external data that you want to tie to the token. It's very easy to implement exactly the same construct with XSalsa20, so it would really just be secretbox with AEAD support, but that's non-standard and you won't find any native library support.
4. Public signature of public data with Ed25519 (this is NaCl's crypto_sign).
5. Authenticated asymmetrically encrypted tokens: to get partial forward secrecy, the easiest way would be using sealedbox and then signing the result with crypto_sign. It's not the most efficient way to do this, but NaCl/libsodium don't have a tailored operation for this use-case, so you would have to use primitives directly.
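For option 2 (HMAC-SHA256 over non-secret data), even the key-rotation gap noted above for Fernet is easy to fill with a key-ID prefix. A sketch, where the key table and token format are made up for illustration:

```python
import hmac, hashlib

# Hypothetical key table: "k2" signs new tokens, "k1" is kept around so
# tokens minted before the rotation still verify until they expire.
KEYS = {"k1": b"old-key", "k2": b"current-key"}
CURRENT_KID = "k2"

def mint(payload: str) -> str:
    mac = hmac.new(KEYS[CURRENT_KID], payload.encode(), hashlib.sha256).hexdigest()
    # The kid travels in the clear so the verifier knows which key to try.
    return "%s.%s.%s" % (CURRENT_KID, payload, mac)

def verify(token: str) -> str:
    kid, _, rest = token.partition(".")
    payload, _, mac = rest.rpartition(".")
    key = KEYS.get(kid)
    if key is None:
        raise ValueError("unknown key id")
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mac):
        raise ValueError("bad MAC")
    return payload
```

Rotation then means adding a new entry, switching `CURRENT_KID`, and dropping the old key once its tokens have aged out.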
All in all, not quite clear cut, and nobody is making a library that does that for you, so it's easy to see where JOSE is coming from.
Okay, dumb jokes aside, they're clearly talking about alternative ways to authenticate, so I wouldn't know how you could conclude this was about anything but JSON Web Tokens.
Just specify in the first sentence what you are talking about as full words. Is it that hard? Apparently it is.
Which makes sense. It has been a historically successful language that served well for decades.
JWT wasn't even good when it was new.
Nope, JSON Web Token.
Idiots shouldn't write authentication code. Especially when credentials are involved.
Funny that jwt-go was used as an example; it was never vulnerable to the alg=none attack:
Dead on. Sure, some early JWT implementations were poor... during the first few months that JWT was gaining traction. That was over 2 years ago. You may as well write an article about how Internet Explorer is awful, while referring to qualities present in IE 6. Disclaimer: I still dislike IE/Edge, but no longer have pertinent reasons as to why I maintain that opinion.
There is not a single valid criticism of JWT from a security perspective. The only criticism outside of security considerations I'd view as valid is that the length of the strings quickly becomes bloated for the amount of information contained within (ie: inefficient bandwidth and storage usage, the same complaint as with long cookie headers requiring more TCP packets).
see the "X.509" section here
> It's important to note that my experiment is not JWT.
> When you reduce JWT to a thing that is secure,
> you give up the "algorithm agility" that is a proud part
> of the specification.
I don't agree with him, though: unless the standard requires implementing all of the available algorithms, one may choose to implement only those that he/she deems safe/worthwhile.
Agreed. I view this flexibility as a developer feature, not a client feature.
> Finally, note that it is an application decision which algorithms may
> be used in a given context. Even if a JWT can be successfully
> validated, unless the algorithms used in the JWT are acceptable to
> the application, it SHOULD reject the JWT.
From what I understand from the above, the server side can decide to _always_ reject the "none" algorithm and still qualify as a valid implementation. The fact that the "none" algorithm is implemented or not by the library becomes a detail.
I'm using JWT for an API, and the server chooses which algorithm to use.
I really don't understand why a client should get to bypass the server.
For example, both of these decoded headers are equivalent:
Obviously, these encode to two different values. If you reattach the wrong one, signature verification will fail.
Disclaimer: I maintain a Python JOSE library and have had to answer questions related to this on more than one occasion.
So I still think the header is superfluous even for this use case.
edit: in fact, the client needs to know that it's base64 encoded to even read the header in the first place.
This is pretty standard for rolling signing keys and api auth methods and all kinds of stuff like that.
How do you tell RS256 and ES256 JWTs apart - so you can figure out how to validate them - unless the JWT actually encodes that information?
The trick is that JWT APIs need to force developers to choose which algorithms they want - having a `decode_jwt` function is not a good idea, `decode_es256_jwt` is much better. It'd validate that the alg in the header is correct, and return a specific error if it's not - if that error is returned, the developer can try `decode_rs256_jwt`.
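A sketch of that API shape in Python (the per-algorithm decoders are assumed to exist elsewhere; only the unverified-header routing is shown, and the names are mine):

```python
import base64, json

def peek_alg(token: str) -> str:
    # Read the UNVERIFIED header only to pick a pinned decoder; whichever
    # decoder we dispatch to must re-check alg and verify the signature.
    header_b64 = token.split(".", 1)[0]
    pad = "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(header_b64 + pad)).get("alg", "")

def decode_jwt_strict(token: str, decoders: dict):
    """decoders maps each allowed alg to its pinned decode function, e.g.
    {"ES256": decode_es256_jwt, "RS256": decode_rs256_jwt} (hypothetical)."""
    alg = peek_alg(token)
    if alg not in decoders:
        raise ValueError("algorithm %r not allowed" % alg)
    return decoders[alg](token)
```

The allow-list is an explicit argument, so a caller can never accidentally accept `"none"` or an algorithm they didn't opt into.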
This is how I've designed the API used in my OpenID Connect implementation. It works wonderfully.
Which still support web browsers that were released before 2007. (But somehow, they can't support Firefox Mobile. mmhhmk. Totally not a plot to get rid of mobile ad blocking.)
Or maybe - just maybe - the claim that specifications should be judged by their implementations is entirely nonsensical. The fact that Internet Explorer exists doesn't in and of itself mean that web browsing as a whole is a defective concept. The fact that single-ply toilet paper exists doesn't mean we should stop using toilet paper. Likewise, the fact that some JWT implementations are defective does not in and of itself say a whole lot about JWT itself.