- to revoke a JWT you have to blacklist it in the database, so checking validity still requires a database call.
- JWTs are supposed to prevent database calls, but a regular request will hit the database anyway.
- JWTs are large payloads passed around on every request, taking up more bandwidth.
- If a user is banned or becomes restricted, it still requires a database call to check the state of the user.
- JWT spends CPU cycles verifying the signature on every request.
- JWTs just aren't good as session tokens, which is how a lot of web developers try to use them. Use a session ID instead.
Where JWT works best:
- when a client can interact with multiple services and each service doesn't need to do a network request to verify (i.e. federated protocols like OpenID). The client verifies the user's identity via the 3rd party.
- as a one-time-use, short-lived token, such as for downloading files: the user requests a token from the auth server and then sends it to the download server.
I think they're quite good at authentication. They're less good at authorization when you want to update that faster than expiry times.
Once you go down the path of checking a DB alongside the JWT, your design has gone off the rails. Either the expiry works for you or it doesn't. Don't try to "fix" it.
Don't blacklist; use shorter-lived tokens and have the client refresh as needed. A 10-15m token gives plenty of lifetime without opening up a huge risk window.
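The short-lived-token approach can be sketched with a hand-rolled HS256 mint using only the standard library (the 900-second TTL and the claim names are illustrative; in practice you'd reach for a vetted JWT library rather than rolling your own):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_token(secret: bytes, sub: str, ttl_seconds: int = 900) -> str:
    """Mint a short-lived HS256 JWT; a 15-minute TTL keeps the
    revocation window small without needing a blacklist."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"sub": sub,
                                 "exp": int(time.time()) + ttl_seconds}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"
```

The client refreshes before `exp` passes; the server only checks the signature and the expiry, never a database.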
Stakeholder: “so you are saying that after a user is denied access they can still access the resources?”
Dev: “yes, but only for 15 minutes. Also, it makes our system simpler, decreases database calls, increases performance, ...”
Stakeholder: “nope”
"Authentication takes a few minutes to replicate throughout our systems so a SLO request should be resolved within a few minutes". Stakeholder: Ok sounds good
+1 on this... it can be up to a 30 minute lag in some orgs... oh, you have access to those systems for a while until things sync up... similar for LDAP/AD sync with Nix/windows.
Which makes sense when I think about how some of the more trigger happy orgs made a point of shutting peoples accounts of as they were being walked into a room...
For parts of the site where you need to boot somebody instantly... just hit up the authentication server on every request to validate the session. For parts of the site where it doesn’t matter so much, wait for the token to expire....
I did exactly this on my last JWT implementation. Common actions wouldn't hit the database if the token was less than an hour old, but actions like changing email address or password would always check the database.
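A minimal sketch of that split (the action names and the one-hour threshold here are made up for illustration):

```python
# Actions that always re-check server-side state, regardless of token age
SENSITIVE_ACTIONS = {"change_email", "change_password"}

def must_check_database(action: str, token_issued_at: float,
                        now: float, max_age: float = 3600) -> bool:
    """Common actions trust a token younger than max_age;
    sensitive ones always hit the database."""
    if action in SENSITIVE_ACTIONS:
        return True
    return (now - token_issued_at) >= max_age
```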
Usually in one of those scenarios, it will take you at least that long to deactivate a fired employee's accounts and access anyway... in reality it's not much of an increased risk...
You still could blacklist, but realistically most areas don't need a dedicated revocation check. Some critical areas might, depending on the space.
So where do you store the tokens that should not be allowed to refresh? And how short-lived should they be in a "Reset Password" scenario, when you need to kick out a malicious user?
You can still use redis or memcached, or even an RDBMS table for that matter, for revoked tokens... and do a lookup for critical systems... You don't need to do that on EVERY request though. It's really no more difficult than a session server/service/database, and more likely to scale better.
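A sketch of that selective lookup, with a plain set standing in for the redis/memcached revocation store (the route names are invented):

```python
revoked_jtis = set()  # stand-in for a small replicated store of revoked token IDs
CRITICAL_ROUTES = {"/account/close", "/billing/charge"}

def is_allowed(jti: str, route: str) -> bool:
    """Only critical routes pay for the revocation lookup;
    everything else trusts the (short) token expiry."""
    if route in CRITICAL_ROUTES:
        return jti not in revoked_jtis
    return True
```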
that's the whole point. That's the trade off you make, you DON'T implement those features. When you need that, it doesn't make sense to use JWT. Right tools for the right job...
But isn't the requirement for using the refresh token that it must be kept secure? For example, if your users authenticate in a browser, they get back an access token and a refresh token. If that refresh token were stolen on the wire or from within the browser, how would you prevent someone from using that refresh token perpetually if you're not blacklisting?
This is essentially an inverse blacklist (aka refresh token whitelist) which requires server-side state. The part that these things seem to miss in my mind, especially in situations where security requirements are high, is that the authentication process is equivalent to the refresh process.
For example, you can have a complicated authentication process where you require a password, 2FA, etc. These things help ensure that the user is that user. But once that process completes, it is replaced with access_token+refresh_token. Anyone can take those items and impersonate that user. Attempts to lock this down require server-side state with the ability to revoke stolen tokens if detected.
Don't get me wrong, the same issues arise if you were using cookie-based session id's or the equivalent. But once you're doing this stateful token stuff, there appears to be a lot of additional complexity without a lot of additional benefit over the traditional way.
However, if you aren't worried about this level of security, there is clearly a benefit to using this newer style.
More power to you. I prefer to use the best tool for the job, which in this case is session IDs, since they are simpler, have been battle-tested, and have proven to work over the last two decades.
I too love the best tool for the job, which is why blanket statements saying session IDs should always be used for authentication are very puzzling to me.
HTTPS, HMAC and asymmetric keys are battle tested and proven to work as well, that was one major point of the article.
I didn't say that they should "always" be used for authentication, but that session IDs fulfill most web app user authentication needs. Most devs that implement JWT treat them as stateful, which defeats the purpose of them. JWT has its use cases when done correctly.
Session mechanisms are all too often different across every web stack though, so if you have a variety of services and applications behind a single logon, you need to solve an O(n) problem that JWT makes O(1).
There will be very few blacklisted tokens, and they are ephemeral. You can use memory-replicated datasets such as CRDTs, or just broadcast the whole blacklisted-token list to all nodes.
If you have gotten to the point where you need to hit the database to verify the JWT, something is wrong. You either need to turn down your JWT expiry time, or refresh your JWT more often, or both.
That video is ridiculous. The whole time is spent talking about how cookies are superior to local storage which has little to do with JWT. You can use JWT and store it in a cookie. Session cookies are most certainly not automatically signed. Signing a session ID provides absolutely no value (signing claims, however, does). Revocation is exactly the same for both of them. JWT has a standard jti field for the session ID. I'm also not sure why you'd store all of a user's information in a JWT. You can just put in the minimal information to accomplish what you need.
If you cryptographically sign the session cookie, as suggested in the video, then you accomplish the exact same thing as a JWT token - so then why use JWT at all, if you're going to look up the session data from the database in any case?
JWT was meant to be stateless, if it's not, then it's just a layer of unnecessary complexity with potential security and implementation flaws.
JWT is basically a spec for how to sign the session cookie. Correct me if I'm wrong but there are 2 fundamentally different ways to do user session management: a) user has a random key that can be compared to stored key (DB, Redis, ...) b) signed session information, probably stored as cookie.
It's possible to add additional information in a JWT. And of course it's complexity that adds additional attack surface, but at least there is some kind of standardization around it.
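The two approaches from (a) and (b) above, side by side, as a rough stdlib sketch (HMAC-SHA256 for the signed variant; a dict stands in for the session store):

```python
import base64
import hashlib
import hmac
import json
import secrets

# (a) random opaque ID: the server stores everything,
#     the cookie value proves nothing by itself
sessions = {}

def new_session(user_id: int) -> str:
    sid = secrets.token_urlsafe(32)
    sessions[sid] = {"user": user_id}
    return sid

# (b) signed claims: the server stores nothing and just verifies the MAC
def sign_claims(claims: dict, key: bytes) -> str:
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    mac = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + mac

def verify_claims(cookie: str, key: bytes):
    body, mac = cookie.rsplit(".", 1)
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return None  # tampered, or signed with a different key
    return json.loads(base64.urlsafe_b64decode(body))
```

Variant (a) needs a lookup on every request; variant (b) trades that lookup for the revocation problems discussed elsewhere in this thread.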
My favorite use for JWT was actually on the backend for Frontend-to-service-to-service auth. It was actually a pretty natural way to flow the user context around without getting ugly with our API calling conventions.
Basically, clients all used NTLM to talk to the main site, but the main site would use JWT to pass the authenticated user info to the other services being called. The signature ensured that you couldn't spoof, short of being an authorized user that could get an impersonation token for calling the APIs.
But the nice thing was it meant we didn't really have to hit the DB at all in any of this, and it was way cheaper to implement than an API gateway.
It depends on whether you need to call other APIs, like microservices; you can use the JWT on behalf of the user to request content from other services. JWT also introduces `scope`, which determines which services the user consented to and allowed your backend to call. These things are not supported by a simple session cookie.
I mean, they're not supported OOB, but you're just describing a session cookie with some signed metadata. If "the ecosystem" and interoperability with existing services is the goal, then JWT has the advantage.
If you're talking about something bespoke then it probably doesn't.
Delegation via JWT replay downstream? Maybe, I guess, if those other services all have the same "aud(ience)" requirements, or don't bother checking audience. Probably not a design to hang one's hat on.
That's precisely the use case for JWT I recently had to work with, where cookies are irrelevant.
The web server gets a token from the API server, then prepares a few JSON messages that the web client will send asynchronously with JS. Since each message content is signed, the web client can't tamper with what is sent to the API. JWT was perfect for this 3-tiers messaging.
I mean is all this complexity really worth "I can send data to an untrusted client so that it can later send it back to me?" compared to just storing that data somewhere like Redis?
Then you have to provide a consistent view of the database across all server nodes, and the database updates need to propagate to all of your servers more quickly than the clients can issue requests. How complex is JWT compared to that?
Yeah, I was hoping for a fair comparison but it seemed like a pretty big strawman. Like he just takes it for granted that "a session cookie is a cryptographically-signed identifier", but that's not remotely standard. At its most common (looking at you, JSESSIONID), simple form, the session cookie is a securely generated random number that is used as an index for state, and signing plays no part. The presenter then goes and talks about how cookies can be used to store other pieces of data in a stateless way, but it all branches from this premise that cookies are cryptographically signed, which isn't historically true.
Author here. Random identifiers and encoded objects are both widely used historically. Random cookies might have been more common 2 decades ago when every byte was expensive, but that was a while ago.
If you work mainly in Java for example, you'll more often see JSESSIONID which are random string identifiers, referring to a database containing active tokens and user profiles.
However if you work in Python, you'll more often see objects. Typically something like a user identifier + creation date + random bits, that is encrypted with a symmetric key. It's usually encrypted, not signed, so yet another thing than signed tokens and random cookies.
Sure Django and Flask use secure session cookies but that doesn't automatically make them all secure or signed/encrypted. Most cookies are plain text and there's no reason they need to be secure (they just contain user metadata not auth information)
JWT has many uses that have nothing to do with web browsers. The author doesn't mention anything about web browsers in the post. This line of JWT criticism needs to go away unless JWTs are being discussed specifically in a browser-based scenario.
JWTs are nice because the same authentication scheme can be used for applications and websites.
Basically a bunch of endpoints can be put up, and if they use JWTs, it is easy to hit those endpoints from any type of app.
Cookies can of course be used, but that requires pulling cookie jars into native code. Perfectly do-able, but also super awkward and potentially error prone. e.g. I remember using apps on Windows that required me to clear my Internet Explorer cookies if the native app's auth got into a broken state!
(Things aren't that bad anymore)
JWTs are also nice because I can easily write services that authenticate to each other. I can have a service running on my backend that authenticates its limited access service account, gets a JWT, and goes and talks to another service. Could I pass around cookies? Sure, but it'd be more work and more complicated than "attach this JSON blob".
Cookies are nice if everything is browser based, but I'd argue that isn't the best way to build services.
(And finally, the amount of time I've spent debugging JWT issues < the amount of time I've spent debugging cookie issues!)
There is no need to write to a cookie in server-to-server auth, just pass an auth token back in a custom header. No JWT required. Cookies are for offline users.
> just pass an auth token back in a custom header.
At that point why not just use JWT?
If my auth service provider already uses JWT (which it does), and all the platforms I am writing on have a provided library that consumes JWT (which they do), then why would I go with a custom header?
Also having uniformity of code patterns is nice.
My web service uses the same authentication scheme as my native apps. Heck my backend DB knows how to look JWT tokens and apply permissions correctly.
We use JWT for doing time- and IP-address-limited cross-domain redirection. We also use it for partners who want to provide SSO access to our site without having to implement full OAuth. They just provide us with their public key and use any of a number of libraries to generate a JWT with the IP address, a short expiration, and an email. Once we receive a JWT and validate it using their public key (and the other associated fields), we establish a standard cookie session.
I've written such a rant almost a year ago. [1] The article shows how to build a "RESTful" API secured with sessions implemented using regular cookies: simpler and without unnecessary complexity.
That's nice, and how it was done for decades. But I'm looking at JWT in a context where we have an application with a REST API, third parties paying us for licenses want to write frontends running on their own domains using that API, and authentication servers are run by end-user organizations that manage their own users.
Our API knows that that organization's auth server is allowed to sign tokens, the third party frontends can obtain those tokens and send them to our API, and it works (or so I hope, I'm in the reading up on all this stuff phase). Sessions using regular cookies just don't.
I don’t see any mention of cookies in that post except about an upcoming post. Does your framework provide the persistence on the client side for authentication, or does it rely on the client to maintain that token?
I have implemented JWT with not much Java code, supporting only one encryption standard.
It was easily implemented, easy to understand, secure by design, and not open to any of those security issues, because there was no magic lib which would have allowed for some downgrade attack.
And what did it actually solve? Session stickyness. Simple and easy.
I’m always amused that with JWT, there never appears to be any separation between JWT-the-storage-format and JWT-what-I-do-with-it. JWT as a storage format is great indeed, if you pin the signing/encryption algorithm. Otherwise you've shot yourself in the foot, which is bad, yes.
Everything else isn’t JWT. Sure you can use it with OpenID/OAuth/whatever. Sure you can store them in cookies. Sure you can use them with or without sessions. But how is any of that related to JWT specifically?
One of the articles says with JWT I have to re-implement session management. Just use a different framework then. Sessions with cookies are also not magic.
Another article basically says you don’t need OAuth 2.0 with access tokens and refresh tokens. Very true. Also not about JWT.
> If you pin the signing/encryption algorithm. Otherwise you've shot yourself in the foot, which is bad, yes.
I reckon if the library you're using doesn't force you to pin the algorithm (or opt out of pinning), your foot is probably already full of bullet holes.
JWT allows for the tokens to be signed using any of several algorithms, including none[1]. Pinning would restrict this to preferably just one, but at the very least should not allow unauthenticated tokens.
Your question is a good one, and "pinning" is not a very appropriate term here. Whitelist acceptable algorithms by issuer, ok fine--you have to have a library of acceptable public keys or shared secrets to go along with each of the allowed algorithms anyway. The JWT consuming infrastructure here on top of jose4j uses a registry of keyinfo metadata to allow for provision of multiple expirable keys and supported algorithms for each authorized issuer, and I'm not sure why anyone would do it other than that way, really.
Disallowing the "none" algorithm is an important implementation detail, like, maybe your JWT library demands an environment value "JWT_ALLOW_UNSAFE_SIGNING_I_KNOW_WHAT_I_AM_DOING=1" or just doesn't support it in the first place. Maybe it was foolish to ever make any allowance for this in the first place.
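One way to make that concrete: parse the header first and refuse anything outside a pinned allow-list before even touching the signature. This is a stdlib-only sketch supporting HS256 alone; a real verifier would also check `exp`, `aud`, and so on:

```python
import base64
import hashlib
import hmac
import json

def _b64d(s: str) -> bytes:
    # Re-add the padding that JWTs strip from base64url
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_pinned(token: str, secret: bytes,
                  allowed=frozenset({"HS256"})) -> dict:
    """Verify a JWT, accepting only a pinned allow-list of algorithms,
    so alg=none (or a surprise algorithm swap) is rejected up front."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(_b64d(header_b64))
    if header.get("alg") not in allowed:
        raise ValueError("disallowed algorithm: %r" % header.get("alg"))
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(_b64d(sig_b64), expected):
        raise ValueError("bad signature")
    return json.loads(_b64d(payload_b64))
```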
> Maybe it was foolish to ever make any allowance for this in the first place.
Since JWT is used as a wrapper around data, it is possible that your needs will vary based on deployment scenario. In some cases, that data may need to be encrypted, in other cases only signed.
In some cases I may not need integrity protection/repudiation and am already sending over a mutually authenticated/encrypted channel. The implementations may not want signing to be forced on them, impacting their compute/data budgets - but they still want to write their standards to depend on a single data format.
In general there should not be a whitelist, but a precisely known strategy going into evaluation. This is because:
- unlike TLS, you likely have a relationship with the issuers
- unlike browsers, the different components are being validated against the domain
For JWKS-based distribution, that usually means that a key identifier maps to a specific issuer and to a specific validation strategy. If you supported CA-based trust models, that specific validation strategy should come from the certificates.
You shouldn't look at the public key data and decide based on the algorithm inside the JWT how to interpret it.
Because as critics rightfully point out, without any whitelisting, you can just specify that your JWT does not have a signature and then it’s a valid token, whatever the contents.
I tend to just implement minimal JWT myself... the auth server issues a token, and all services expect an Authorization bearer header with one. Also, pinning the algorithm and allowed keys is absolutely important.
I'm also not a fan of "sessions" anywhere other than at the client; they tend to fail at scale.
tptacek: Credential attenuation in Macaroons is cryptographic; it's in how the tokens are constructed. I don't see the opportunity for a DoS (that didn't exist without attenuation already).

Macaroons are a really lovely, tight, purpose-built design that happens to capture a lot of things you want out of an API token, including some things that JWTs don't express naturally despite their kitchen-sink design.

JWT is more popular because there are libraries for it in every language, and people don't think of tokens as a cryptographic design (or nobody would be using JWT!), they think of them as a library ecosystem. JWT is definitely the stronger library ecosystem!

This is also why I probably wouldn't ever bother recommending PASETO. If you're sophisticated enough to evaluate token formats based on their intrinsic design, then you should implement Macaroons if possible (it's almost always possible). If you're not, then you're going to use JWT.
Macaroons have many small edge cases that'll bite you when you try to use them in practice:
- there is no spec; everyone re-implements the de-facto standard. If you read the whitepaper, it's not what's in use.
- the de-facto implementation is full of holes, e.g. time is expressed without a timezone, so it's not clear whether it's UTC or not.
- the implementation requires a custom parser for a custom binary format, but the caveats in wild use (remember: there are no standard ones) still use text, so it forgoes the potential benefits of encoding dates and numbers in binary.
- the highly hyped third-party macaroons are barely supported by implementations in the wild: only one level is allowed, and it's not specified anywhere.
- if we're talking about third-party macaroons there is another layer of problems: no standard for caveats means your third-party service needs to be closely coupled with your own.
JWTs have many problems, but compared to Macaroons it's just JSON and base64, available in all programming languages at no additional cost. JWTs also have an actual spec that implementations can agree on. Macaroons promise you extreme power but don't deliver. Several of Macaroons' issues could be resolved with some effort (e.g. standardization); others, like resolving cycles in third-party caveats, are IMO design flaws deeply embedded in the format.
As for tptacek's recommendation this only serves as a reminder that even if a highly respected internet figure recommends you something you still need to do your own homework instead of blindly following.
That's unlikely (last commit in the project is dated Apr 22, 2017). And it's another problem with "the Macaroons ecosystem". After initial hype people discover real world issues with Macaroons and abandon their pet projects.
I agree that people need to do more research than just reading a blog post.
But we are not on the same page about Macaroons and what makes them interesting. I do not care about interoperability and standardization (I do sometimes, but not here). Apart from things like OIDC, most of the JWT usage I see is internal to projects; they're used as a utility library to do utility crypto in HTTP APIs. In those scenarios, it doesn't matter whether "your" Macaroons are the same as mine.
What's interesting about Macaroons are the underlying design.
I'm honestly surprised to hear that anyone would go into a project with something like Macaroons and expect to fit into a pre-existing ecosystem of compatible Macaroon implementations, because, as the post says, they're not widely used.
I hear macaroons come up fairly often, but they suffer from being only described by a Google research paper, without a standard or other sufficient formal specification. My understanding is that compatibility across implementations is lacking.
PASETO is by Paragon, the authors of said searing indictment.
IMHO their argument really comes down to a difference in opinion in how cryptography should be supplied to developers. TLS and JWT standards allow for a wide variety of cryptographic algorithms, and implementations may provide various ways to negotiate that set of algorithms, such as whitelisted set.
This provides for migration over time from legacy systems to new algorithms, but creates a risk that the library author will have a security issue in their implementation of the standard, or that the application developer will misconfigure said implementation.
The alternative strategy is something like NaCL/libsodium http://nacl.cr.yp.to, where experts standardize on single packages of algorithms (or extremely limited set, such as one standard and one legacy) to implement specific cryptographic primitives.
The problem usually quoted here is one of compatibility, migration, and experimentation. There are often no provisions for older systems which cannot handle one of the profiles involved, or primitives for managing non-standard cryptographic sets. Many of these specifications also dictate removal of an old algorithm set to add a new one - making the specification only really valid in lock-step upgraded systems.
> I note that the logo depicts macarons [1], rather than macaroons [2].
Partly (largely?) my fault. When we wrote the Macaroons paper, I was simply not aware that Americans use the French word when referring to the French variety of macaroons.
Off topic: it gives me undue vexation that there are two dessert items with names so similar to each other that everyone keeps confusing them. Can we just all agree to come up with a new name for one of them?
To be fair, there is a similar situation with "cookie", which means any kind of small, flat, compact, unleavened flour-based sweet baked item in the USA, but more specifically a particular soft kind in the UK.
And conversely, "biscuit", which has the former meaning in the UK and means some sort of weird scone in the USA.
JWTs have made client side auth integrations look better. But the problem is that common security considerations and implementation details are generally overlooked.
1. Tokens are typically stored in localStorage. (app becomes vulnerable to CSRF & XSS attacks).
2. Tokens can be stolen. Now this is generally controlled by having a very short expiration time.
3. Short expiration times mean persisting refresh tokens to do a silent refresh.
4. Blacklisting of tokens adds complexity and defeats the purpose of decentralising the auth workflow.
5. There's technically no logout. It's all done via very short expiration times. With multiple tabs open, logging out on one tab needs to be synced with rest of the tabs via some event listeners.
6. SSR rendered pages need to send along the latest refresh token cookie so that the browser can use it.
7. The refresh token is sent by the auth server to the client as an HttpOnly cookie to prevent XSS/CSRF.
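Point 7 might look like this on the wire (the cookie name and the `/auth/refresh` path are illustrative, not from any particular framework):

```python
def refresh_cookie(token: str) -> str:
    """Build the Set-Cookie value for the refresh token.
    HttpOnly keeps it away from page JavaScript (mitigating XSS);
    SameSite=Strict plus a narrow Path limits where the browser
    will send it (mitigating CSRF)."""
    return ("refresh_token={}; HttpOnly; Secure; SameSite=Strict; "
            "Path=/auth/refresh; Max-Age=2592000").format(token)
```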
I think you're mistaken on point one; sites that use localStorage to store tokens are not in general susceptible to CSRF attacks [1]. The reason being that separate domains can't access each other's sessionStorage or localStorage in the browser. In fact, that's one of the advantages of using the DOM storage APIs over sessions/cookies [2].
Well, yes, but it's not exactly the same. Session-based logout (or JWTs with blacklisting) automatically protect resources that haven't been fetched yet, but leave open the possibility of lingering previously-fetched resources. JWTs without a blacklist even leave open the possibility of fetching additional resources with a supposedly logged-out credential. That seems like a much bigger hole.
For me blacklisting is a bad idea in general. The same effect can be achieved without blacklisting's downsides by using asymmetric keys per user, where you rotate keys after things like logout or password change. Keys might be stored in replicated storage, same as sessions, and deleted/rotated as needed.
Don't get me wrong, JWT is not a silver bullet, nor was it meant to be one in the first place. It's not a session replacement, but there are places where the right implementation makes a lot of sense.
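A sketch of the per-user key idea (a dict stands in for the replicated key store; HMAC rather than asymmetric keys, to keep it stdlib-only, but the rotation logic is the same):

```python
import hashlib
import hmac
import secrets

user_keys = {}  # stand-in for a replicated store: user id -> current signing key

def key_for(user_id: str) -> bytes:
    if user_id not in user_keys:
        user_keys[user_id] = secrets.token_bytes(32)
    return user_keys[user_id]

def sign(user_id: str, data: bytes) -> str:
    return hmac.new(key_for(user_id), data, hashlib.sha256).hexdigest()

def valid(user_id: str, data: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(user_id, data), tag)

def revoke_all(user_id: str) -> None:
    """On logout / password reset: rotate the key, and every outstanding
    token for this user stops verifying immediately -- no blacklist."""
    user_keys[user_id] = secrets.token_bytes(32)
```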
1. You can still use a cookie if you really want to, or keep it in your application state in memory for a PWA, though a browser refresh will kill it.
2. Same for any authentication header or token.
3. I'm not sure I see the problem.
4. See 3: don't do it, use shorter-lived tokens with a refresh if necessary.
5. See 4.
6. Again, you could still use cookies, either longer-lived or backed by a state/revocation store... I don't do much SSR in practice, mostly PWAs.
7. That is absolutely an option... usually I forward back with the token on the hash, then the first thing the app does is use the History API to pull it out and remove it from visibility... it does appear for a brief moment, but like anything else, you'd see it in devtools anyway.
Fernet [0] came close to being a suitable and secure JWT replacement, but the specification hasn't really been updated in a while. A simpler and equally secure alternative to both JWT and Fernet is Branca [1] tokens, which use the same cryptography as PASETO v2.local [2].
Here's some trivia: the name comes from an Italian drink from the 19th century named Fernet-Branca [3].
I've heard Fernet and Coke is the traditional "bartender's cocktail".
Had a sip of my fiancee's once, it tastes like someone mixed every soda from the soda fountain with every herb and spice in their spice rack, and then squeezed a healthy dollop of toothpaste in for good measure.
I'm impressed that you put your money where your mouth is and built & marketed an alternative: https://paseto.io/
Anyone here got experience using Paseto in anger? (besides CiPHPerCoder who made it)
I would love a JWT-like thing that's equally common yet better designed. But especially when using it in public APIs and the likes, acceptance has to be pretty broad. Anyone got insights as to how mainstream Paseto is getting?
Judging from the number of stars on the various Git repositories for different languages, there are a few people using it but not a whole lot. The most popular implementation seems to be PHP-based. That suggests to me it's still early days for this. E.g. the Java implementation only has 13 stars, which is not a lot. Also, it has a native dependency, which is not ideal; JWT, by contrast, has a pure Java implementation from auth0.
JWT has been out there for a few years, and there are many uses of it that are fine. I've used it in the past and it was easy to set up and get started with. The main criticism seems to be that users have too much wiggle room to do silly things like using alg=none, or that certain widely used algorithm combinations have some weaknesses. I guess that's valid, but not a huge concern if you know what you are doing.
Paseto looks like it improves by narrowing down the choices to some sane choices, which is a valid approach. Of course IETF could update the relevant RFCs to use the same algorithms for JWT at some point.
With lots of "security best practice" crypto constructions you can quickly find yourself needing to use unreviewed and unsupported libraries. This is a different kind of security hell.
It's worst when you're publishing an API and you need to support web clients, plus Android and iOS, and ideally not make things super difficult for customers using, say, Erlang.
As someone who used to regularly recommend things like "Use a high level / simplified library - try keyczar! Or crypto_box! etc" it has been disheartening to see how almost any even slightly niche choice will quickly devolve to developers having to pull in a bunch of dodgy looking objective-c code written say, 3 years ago by a single unknown developer.
It's usually worked out better from a whole-system perspective to stick with very widely supported standards (flawed as they are) and maintain a checklist of known footguns. It makes me sad.
Of course if your crypto is opaque to clients / not part of your integration surface then you've got a lot more wiggle room.
Perhaps what I've taken the long-winded route to getting to is, for anything where interop is important, I believe that popularity is absolutely valuable due to network effects and momentum.
- database servers that have public IP addresses that are barely filtered
- boxes not getting updates
- under-secured Linux boxes with almost every option recompiled to on
- chmod 777 developers
- substituting signatures of checksums for signatures of data
- not using HMAC and opening themselves up to length-extension and chosen plaintext attacks
- storing SSL/SSH private keys unencrypted on unencrypted laptops
- downloading HR data to a laptop and leaving it
I actually was fired once from a big name university in the SF Bay Area for refusing to haphazardly ruin the network security of a credit card processing private campus network to facilitate a new vendor remoting into terminals.
Integrity through awareness/caution, processes and standard components.
Bringing in a bunch of random dependencies, regardless of license or support status, is inviting all sorts of gaping attack surfaces.
Popularity definitely has value when designing APIs for public consumption. JWT has a concrete edge here. This is why I asked HN about their opinions on Paseto: if there's a chance it'll overtake JWT/JWE in popularity, then that makes it more suitable for APIs.
In the context of security, popularity has the added benefit that there's enough eyeballs, so all bugs are shallow.
Author here. As a matter of fact, the world has already settled on JWT. Paseto is dead in the water and will never gain any traction.
Every single company has no choice but to support JWT in some capacity. Whenever one has to use social auth (Google/Facebook/twitter), or Microsoft products (ADFS/Office365), or third party authentication solutions (Okta/auth0), they're de facto dealing with OpenID Connect + JWT (or SAML but that's a different topic).
> Paseto is dead in the water and will never gain any traction.
We'll see about that. :)
> Every single company has no choice but to support JWT in some capacity.
This will change soon.
> Whenever one has to use social auth (Google/Facebook/twitter), or Microsoft products (ADFS/Office365), or third party authentication solutions (Okta/auth0), they're de facto dealing with OpenID Connect + JWT (or SAML but that's a different topic).
The plan for PASETO has always been to make it a JWT alternative for OIDC.
No, I was speaking even more generally. There are thousands of mainframes still running COBOL that process trillions/billions of dollars of transactions per day. There are obscure software packages and programming languages you've never heard of doing many important, niche tasks. So no, popularity will never equal value. There is no wisdom in a mob. The. End.
Popularity has value because other people contribute to maintenance. It’s not the only value, but it’s still a necessary minimum for most library selection in professional work.
Don't count on it. As it happens, I used to be an AWS consultant with a third-party preferred vendor. Their product direction involves more politics, job security and creating a labyrinthine platform that's difficult to emulate precisely.
That article doesn't contain a single logical argument.
>> JSON Web Tokens are Often Misused
So is everything else. Name one programming concept which isn't often misused.
>> There were two ways to attack a standards-compliant JWS library to achieve trivial token forgery
The keyword here is "were" - Just like how people in Europe "were" dying from the Bubonic plague - It doesn't mean that Europe is unsafe today.
The up-to-date reality is that JWT today has been battle-tested to an extent that few other web standards have. In a way, all the negative attention due to past issues has made it stronger.
>> JSON Web Encryption is a Foot-Gun... this is somewhat like pointing a gun with 5 out of 6 loaded chambers directly at your foot
...And using session IDs inside a cookie is like eating a cookie laced with cyanide.
> I can forge my server-side session id for session hijacking. This is what I understood.
Forge this. For each session:
session_id = bin2hex(random_bytes(32))
Yes, you can change what you send to the server. But you can't hijack another user's session in this probability space (2^-256) by blind guessing. Instead, you need another way to leak their credentials to hijack the session.
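A server-side store built on IDs like that also makes revocation trivial, which is the usual contrast with JWT blacklists. A sketch (the in-memory dict stands in for Redis or a database):

```python
import secrets

sessions: dict[str, dict] = {}  # stand-in for Redis/DB, keyed by session ID

def create_session(user_id: int) -> str:
    sid = secrets.token_hex(32)  # 256 bits of entropy: unguessable
    sessions[sid] = {"user_id": user_id}
    return sid

def revoke(sid: str) -> None:
    sessions.pop(sid, None)  # instant revocation, no blacklist machinery

sid = create_session(42)
print(sid in sessions)  # True
revoke(sid)
print(sid in sessions)  # False
```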
Every time I see a headline with "JWT" in it, I get excited hoping that it is for "JWt" [1], the "Java Webtoolkit", which I love. It happens when I search for it as well: I look for "Jwt ..." and instead of the beloved toolkit, it comes up with all these JSON web tokens and HMACs. Aaah, well, I'll keep looking for that wonderful day when it really is the toolkit. I guess it goes without saying that I recommend it highly.
* Why write your own format when JWT already has predefined keys? If you write your own encoding format instead of crappy-but-interoperable JWT, you have no interoperability and have to write everything from scratch
* If you're following API-first, using cookies for machine-to-machine API interactions is ridiculous (cookies are for browsers and humans)
* JWT being fairly standard plays nice with load balancers/auth proxies/API gateways, which can offload auth or even route on it before hitting the application (database calls are expensive compared to in-memory cached auth, and you probably have an LB anyway)
This article is conflating the benefits of a particular approach to authentication with JWT as an implementation of that approach. That's dangerous because it discourages people from thinking carefully about the semantics involved. Authentication is a topic where the trade-offs should be carefully evaluated for your particular situation.
I do agree that if you need the particular way of doing authentication that JWT is designed for, JWT is indeed a great implementation and can save you a lot of time.
"9) Myth: JWT doesn't support logout or invalidation. (It can with OpenID Connect)"
Iterating on how invalidation works with OpenID Connect, when earlier the author said an authentication service that can go down is a single point of failure you should avoid. So he added a SPOF by using OpenID Connect...
It's all about trade offs. If you want full session management, not everything can be decentralized. People often say that JWT can't handle sessions at all so I am merely explaining that it actually can out-of-the-box and how to make it work.
Anyway, there is always a single point of failure somewhere. There's got to be something that authenticates users and creates tokens in the first place.
Like JSON? I think for some acronyms, JSON and JWT included, those are their "proper" names, with the expanded name being just a curious historical note.
Then at least link the first instance to the Wikipedia page or something. After reading the first couple of paragraphs I had no idea what this article was about.
I'd say it's good writing to link to the wiki entry for a less common concept like JWT in the first paragraph, right next to where they first mention it. It's definitely not ubiquitous like JSON, SQL, or FBI.
JWT is great for some use cases, but if you need auth to be very centralized, just use one of the existing auth mechanisms instead of bolting it on top of JWT. I don't see what the point of using JWT would be if you need highly centralized auth.
Where JWT shines is when the auth service does not need to know the clients that might want to authenticate using it. A system where it can issue tokens to any other service on behalf of a user and say, "here you go, you can use this for the next N minutes". This is very useful when it's not practical for every service/client to "register" itself with the auth service before hand like oauth.
JWT is not awesome. I spent yesterday implementing it. The smallest usable JWT I could create was 137 bytes, not including the Authorization header.
This is absurd -- the total amount of data I needed to store in the JWT was about 10 bytes.
This inefficiency bloats requests. At a time when we're migrating to HTTP/2, which deliberately shrinks headers to speed things up, JWT is going in the other direction.
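For concreteness, here's roughly where the bytes go in a minimal HS256 token, assembled by hand (the claim names and key are made up; a real library adds little beyond this):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses base64url with padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode())
payload = b64url(json.dumps({"sub": "42"}, separators=(",", ":")).encode())  # ~10 bytes of real data
signature = b64url(hmac.new(b"secret-key", f"{header}.{payload}".encode(), hashlib.sha256).digest())
token = f"{header}.{payload}.{signature}"

# Even with a near-empty payload, the header and signature dominate:
print(len(token))  # 97
```

Base64 alone adds ~33% overhead, and the fixed header plus a 32-byte signature set a floor on token size before any claims are added.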
You may only need 10 bytes of info, but that JWT is a lot more than just a data blob. It's a signed set of user info. If you don't need that extra layer, sure, then drop to an opaque token. Complaining that a signed header is large, however, seems a little silly. It's also worth mentioning that HTTP/2 also does header compression which helps with this.
An organization I was at in the past attempted to use them as a replacement for sessions, which turned out to be a terrible idea as I suspected it would.
I've found that arbitrarily re-inventing the wheel because a new thing becomes popular should be done deliberately and with great caution. More generally - I think it's important to look for solutions to fit a specific problem, not problems to fit a specific solution.
However, back to JWTs - I'm currently using them for authorization in an EXTREMELY high traffic websocket server implementation. It's really nice because the tokens are short-lived (the ones I am issuing have an expiration of 60 seconds), and this allows the service to operate entirely in memory except for interacting with a Kafka cluster.
Author here. I wouldn't recommend going below 2-5 minutes. Some OpenID Connect providers actually silently ignore the token expiry time when it's below a couple of minutes.
Consider that host clocks are not always in sync (even with NTP there can be 10 seconds of difference), and the many authentication redirections can take quite a bit of time for slow clients. Limiting tokens to 30 or even 60 seconds is asking for trouble.
But then again, I have to work with thousands of hosts, applications and datacenters, so I feel every edge case. A single application on a single host would not.
While HN is full of JavaScript enthusiasts who would never dare mention anything negative about the language, probably making praise of JWT redundant (even if the token mechanism and the language are completely separate issues), I have to state that I also think JWTs are helpful.
I mostly use them in IoT voice-enabled devices that get their time-limited authorization to access popular voice services through such a token. Voice-enabled devices suck, but that is not the fault of JWT. I think without JWT being that common already, we wouldn't have a situation where devices need to sign requests against voice services, and we would have additional security concerns.
It is a given that you can use a complete different token or other cookie mechanisms that work just as well. But I like them to provide at least some common ground. Even if there is valid criticism about the implementation.
Authentication != authorization should always be mentioned on the topic of JWT. And yes, they are often abused to do things beyond their intended scope. I would think this to be a user error.
JWT always felt a bit strange to me. The fact that we pass user attributes back and forth from the client feels more like evidence of flaws in the web as a platform than it seems like a real solution.
I agree JWT can be very useful, but its implementations are unfortunately all over the place in terms of what algorithms they support, especially lacking in the asymmetric space. Also the docs are pretty bad—spread out over multiple documents, with no explanation of the basic concepts, and they assume a lot of pre-existing domain knowledge.
And then you still have to use JWTs correctly which is very easy to screw up. OIDC has improved this situation somewhat, at the cost of another layer of even more complexity that’s easy to screw up.
Another pro is that they can be read client-side, so the server and the client have an agreement on who the user is and what their attributes are (if they are defined in the JWT payload).
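That client-side read is just a base64url decode of the middle segment; no key is needed. A sketch (display purposes only; never trust these claims without signature verification):

```python
import base64
import json

def read_claims_unverified(token: str) -> dict:
    """Decode the payload segment of a JWT WITHOUT verifying the signature.

    Safe only for UI hints (e.g. showing a username); authorization
    decisions must go through full verification.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

A client can call this on the token it already holds to render the user's name without a round trip to the server.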
JWT works well. Securing API is one of its main use cases.
That being said, please do NOT use basic auth for anything in 2020. This is the worst anti-pattern one could adopt for authentication.
Basic auth simply transmits the username and password in clear text with every request. No application should be receiving usernames and passwords in clear text besides a single auth service. The passwords will get leaked all over the place: developers debugging, verbose logs, exceptions, etc. And unlike tokens, which are meaningless and expire, textual passwords last forever and are extensively re-used by users across websites.
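To see why clear-text transmission matters: the Basic scheme is just base64, not encryption, so anything that logs the header has logged the password. A sketch (the credentials are made up):

```python
import base64

# What the browser sends for user "alice" with password "hunter2":
header_value = "Basic " + base64.b64encode(b"alice:hunter2").decode()

# Anyone who sees the header recovers the credentials with no secret at all:
recovered = base64.b64decode(header_value.split(" ", 1)[1]).decode()
print(recovered)  # alice:hunter2
```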
I wouldn't call it a best-practice, but I've done this.
I guess my basic heuristic is that it's decent for an API that you expect to have very few consumers (internal, partnerships), but I would hesitate to recommend them for an API aiming for wide adoption.
- to revoke a JWT you have to blacklist it in the database, so checking validity still requires a database call.
- JWTs exist to avoid database calls, but a regular request will hit the database anyway.
- JWTs are large payloads passed with every request, taking up more bandwidth.
- if a user is banned or becomes restricted, checking the user's state still requires database calls.
- JWT spends CPU cycles verifying the signature on every request.
- JWTs just aren't good as session tokens, which is how a lot of web developers try to use them. Use a session ID instead.
Where JWT works best:
- when a client can interact with multiple services and each service doesn't need to do a network request to verify (i.e. federated protocols like OpenID). The client verifies the user's identity via the third party.
- as a one-time-use, short-lived token, such as for downloading files: the user requests a token from the auth server and then sends it to the download server.