Show HN: PAST, a secure alternative to JWT (github.com)
362 points by CiPHPerCoder 8 months ago | 138 comments



This is better than JWT. In particular: the whole protocol is versioned, rather than serving as a metaprotocol that negotiates the underlying cryptography. If you speak "v2" of this protocol, and refuse to accept any other version, then you're getting a token that is basically just Nacl.

My only nit --- apart from the "auth enc seal sign" names, which aren't coherent --- is why do public key at all? Yes, Nacl supports it, but that doesn't mean the token format does. What's the use case for it? Who's asking for it? Specifically who? The overwhelming majority of JWT implementations I see aren't public key (except for the fact that the format is negotiated and might be tricked into being that).

Why not punt "seal" and "sign" into a "v3", when/if it's needed?


We use public-key JWTs so that the verifying servers do not have a copy of the secret key, just the authorisation server's public key. That prevents a compromise of the verifying servers from also compromising the entire system (yes, a compromised verifying server can be made to do anything it's allowed to do — but that's still less than everything).


This is the logic people use when they encrypt cookies to public keys. I've never tested a system with public key encrypted cookies that wasn't broken. As a general rule of thumb: public key is what you use when you have no other choice.

PAST mitigates the risk somewhat by hardcoding a public key system into its version. But I'm still recoiling from the kind of first-principles mixing of systems security and cryptography that happens when people find new reasons to deploy public key primitives.

Note that one of the first crypto bugs in JWT stemmed from how it used curve public key crypto.


I think you're conflating the 'seal' and 'sign' use cases. I'm not sure how one could design a secure system where encrypted cookies could either be read by everyone (encrypted with private key) or forged by everyone (encrypted with public key), but using public keys to validate that an attestation was signed by a trusted third party is both sound and commonplace.

For example, this is how identity tokens from AWS Cognito are verified. They are RS256 JWTs signed with Cognito's private key, and any server can download Cognito's public key to validate that the tokens were issued by Cognito and haven't been subsequently altered (without having to make a network call to Cognito to request validation).
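
To make that concrete, here's roughly what such verification looks like with the Python PyJWT library (a sketch; the key handling and audience value are placeholders, not Cognito's actual ones):

  import jwt  # pip install pyjwt[crypto]

  def verify_identity_token(token, issuer_public_key_pem):
      # Only the issuer's public key is needed; it is fetched and pinned out
      # of band (e.g. from the provider's JWKS endpoint), so verification
      # itself makes no network call.
      return jwt.decode(
          token,
          issuer_public_key_pem,
          algorithms=["RS256"],          # pin; never trust the token's header
          audience="my-app-client-id",   # placeholder audience value
      )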


On that note: https://github.com/paragonie/past/issues/13

I might be dropping "seal" entirely, as its utility isn't really a good fit for security tokens. (Sealing APIs can be very useful in other contexts, though.)


Was just going to post this. JWT with public keys is very handy for stateless verification of claims, which seems to have been the primary motivator of the spec.

As someone who had to work with XML-DSIG, JWT seems less bad :)


You don't need public-key crypto to do stateless verification of claims. Secret-key crypto does that just fine. You need public-key crypto if you want _other people_ to be able to verify the claims -- that's an important distinction, because it specifies what you're buying for your much more complicated crypto.
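
To illustrate, here's a bare-bones Python sketch of stateless claims using nothing but secret-key crypto (the format and names are made up for the example; this is not PAST's actual format):

  import base64, hashlib, hmac, json, time

  SECRET = b"key-shared-by-issuer-and-verifier"  # placeholder

  def mint(claims):
      body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
      tag = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
      return body + "." + tag

  def verify(token):
      body, tag = token.rsplit(".", 1)
      good = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
      if not hmac.compare_digest(tag, good):
          raise ValueError("bad MAC")
      claims = json.loads(base64.urlsafe_b64decode(body))
      if claims["exp"] < time.time():
          raise ValueError("expired")
      return claims

Any server holding SECRET can verify statelessly; only parties you'd trust to mint tokens anyway need the key.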


Or stateless verification when "other people" are allowed to create claims. There are certainly federation scenarios where PKI alone could be sufficient (as opposed to complex handshakes such as OpenID Connect or SAML). Obviously in accepting those claims you get to treat every claim with a giant grain of salt, but there are auth models where that makes sense (the "only claim I need to trust is the certificate itself" scenarios, for example).

That said, very few people want such a security scheme and as theoretically simple as it is on paper, it quickly ramps up to the real world complexity of PKI key management.

(I could see a small use for web APIs for developers and power users that allowed, for instance, Keybase-verified claims like HN account name/FB name/email, making them very simple and easy to curl/iwr/httpie from the command line using Keybase-managed keys.)

I'm not sure if that small window of opportunity is worth the support complexity in PAST here, but given the model of supporting only specific versions, having them as separate verbs (sign/seal) makes it easy to block claims you don't expect. The one flipside from an API design standpoint that I see is that it leaves a need for some sort of header like X-Accept-PAST: v2.auth


I’m not sure I follow: if the other party is allowed to create claims too, symmetric crypto seems more obvious. That’s what NS (and later KRBv5) did in the seventies. Or is that precisely what you’re saying?


Kerberos vouching hinges on "ticket-granting server says...", and you know that because the ticket-granting server shares a secret with every player. On the face of it, it'd be much easier to just demand everyone know the ticket-granting server's public key (no need for N keys on the server for N participants).

I've long considered the merits of a Kerberos-like system built on top of something like NaCl... But without out-of-the-box support from all kinds of systems, it'd essentially boil down to ssh+certificates with expiration dates... So I've gathered "better CA for ssh" is the better product. And there are thankfully a couple of projects in that vein (teleport, netflix/bless, others?).

[ed: I should add: I think tptacek is absolutely right about public key systems being easier to get wrong; but part of that is also the problem domain: look at the history of security issues with Kerberos (both implementations and protocol evolution) for a great example. On the face of it NxN key exchange is "textbook simple; should be easy to define, a little tricky to scale". Then there's replay, clock drift, (de)serialisation, nonces, large numbers of session keys (secure random numbers)...]


In a stateless, negotiation-less flow where you have no pre-existing relationship with the other party? Isn't that exactly the bootstrap that PKI was built for?


Aha, I missed the stateless negotiation-less flow. But yes, that's basically TLS as deployed in browsers :)


Public keys solve the key management problem. If you have just one or two tiers it's not a problem.

If you have thousands, you end up spending a lot of time on key distribution if you need individually distributed secret keys.


You say that, but best practice for systems that have that option (say, internal SAML IdPs) still means you do per-peer keys with the annoying key management problem so that you get cryptographic binding instead of relying on a bunch of broken RPs to validate audience restrictions (spoiler: they don't) and in some cases IdPs or middleboxes that need to add audience restrictions (spoiler: they don't either).

What you get is that the peer can't forge tokens. But you're trying to authenticate to them; they already have full authority. So what are you fixing? (I'm not saying it's "nothing", but I am saying it's very little, and it's definitely plausible the increased risk isn't worth it.)

What do you mean by "tiers" here? The specificity of that word suggests you don't just mean "peers", but at the same time clearly symmetric systems win at nested delegation (krb5 and macaroons come to mind).


> What you get is that the peer can't forge tokens. But you're trying to authenticate to them; they already have full authority. So what are you fixing? (I'm not saying it's "nothing", but I am saying it's very little, and it's definitely plausible the increased risk isn't worth it.)

You get something you can show to a third party: 'see, the bank said their client was good for $10,000!' I can see where that might be useful.


I don't think I could've made my own point this elegantly. Eventually trying to disprove what the other party is saying is literally the opposite of what normal token schemes (say, SAML or OIDC JWT) are trying to accomplish: establishing what the other party is claiming.


This feels like grasping at straws.


I don't understand the conflation. 'tptacek is arguing that you preferentially use secret-key crypto to public-key crypto, because public-key crypto is significantly more precarious. It seems like you are raising a new point, not arguing with his.


tptacek was saying everyone should be cautious about PKI-based JWTs and provided encrypted cookies as an example of something that is rarely (if ever) implemented safely and securely with PKI cryptography. My point is that while the 'seal' function of PAST may be of questionable value, the 'sign' function is not. I think it's important to disambiguate those two uses of asymmetric cryptography.


Gotcha, thanks! That makes more sense now.

I agree that they're worth examining separately, but the more general point still stands that there is a lot more that can go wrong with sign than with AEAD. The arguments for broken RSA-encrypted cookies apply similarly to broken signed cookies. It's not clear to me, for example, that an invalid curve attack against seal is intrinsically worse than a nonce (ab|re)use bug against ECDSA.


Your points sound very interesting but don't provide any details. Could you provide more details? I'm interested in learning about this.

For instance, is there a good resource you use on good alternatives for the cases where you think you need PKI but in fact don't?

Is there a good resource on how to use JWT correctly that you trust?

(Edited-tone, was a bit more aggressive but was pointing out the opportunity to educate instead of just say things are bad).


JWS and JWE aren't as bad as JWT itself. You can use them without the JWT header.


I'm pretty comfortable with the makeup of the subset of HN readers that take me seriously and/or understand where I'm coming from, and so I'm going to avoid litigating with strangers on this one.


But that also means you're missing the opportunity to educate the rest of the HN readers who would like to understand -why- something is bad (like me because we're rolling out public key JWTs). Even just a link to a blog post would be better than just "because I'm me and people agree".


Here:

https://storify.com/jcuid/thomas-h-ptacek-don-t-use-json-web...

https://news.ycombinator.com/item?id=13866883

https://kev.inburke.com/kevin/things-to-use-instead-of-jwt/

or go with the flow:

https://www.google.gr/search?q=ptacek+jwt&oq=ptacek+jwt&aqs=...

He has countless comments preaching against JWT and DNSSEC.

Mind you, I know the nickname, I know he is good with sec (I'd hire him to vet an app) and all, but I don't know him IRL and had never heard of him before joining HN... that's to say that he has spent a lot of digital ink discussing these two topics here. The subset he's talking about should account for more than 50% of the regulars, I guess.


The HN link doesn't explain why public key JWTs are bad; it's just more "don't use JWTs", with the exceptionally unhelpful "as engineers, you need to understand first and foremost that JWT is bad". How can we understand if all we get is "JWT is bad because I say it is"?

The Storify link is slightly more useful but again doesn't really explain anything - it's just bullet points of "I think these things are bad in a standard" which is great if you already have the knowledge of those things. Utterly useless otherwise.

And if "implementation errors should lower your opinion of a specification" means we shouldn't use JWT, where does that leave us on WIFI, SSL, HTTP, speculative execution in pipelines, ...?


To me, this rebuttal reads, "I don't understand it, ergo it must be dubious".

I'm criticizing JWT. Not you, or any other developer. If you don't understand or follow or agree with my criticism, that's OK. We can still live our lives.


> To me, this rebuttal reads, "I don't understand it, ergo it must be dubious".

That is not intended - it's more "I don't understand it, I'm happy to believe it's broken but all we get is Appeal To (self)Authority as to why rather than actual usefulness."

> If you don't understand or follow or agree with my criticism, that's OK. We can still live our lives.

Well, sure, but wouldn't you prefer to help people who don't understand by actually explaining something for once?


So I trust you completely here - but I lack the crypto chops to understand why that’s the case, and I’m interested in understanding.

Do you have any good recommended primers that’d help me get it?

Or should I just get off my arse and finally do cryptopals? ;-)


Having started it but to my shame never managed to finish -- children having got in the way -- I wholeheartedly recommend the latter approach.


I would recommend doing cryptopals in any case. It will give you a good background on this and other crypto topics so you can evaluate for yourself.


I'd still love to know the details.


My speculative interpretation (which Thomas may disavow completely) is not that eadmund was doing something technically broken, so much as that he was adding complexity, and risk of mistakes somewhere along the chain, without benefits to justify doing so.

If you listen closely, some security wisdom applies to a lot of specialties. For example, this quote is about authenticated encryption, but it's not really bad advice if you change the subject from encryption to rolling your own web server:

"Authenticated encryption is something you should use as a complete package, implemented as a single unit by a well-reputed open source cryptographic library and not assembled piecemeal by people who do not specialize in cryptography."

So what's the difference then? The stakes are higher. Serve the wrong web page, and you'll be working on a bug. Be careless with security primitives, and you could make a Fortune 500 stock price go down, or make your CEO have to issue a press release, which tends not to reflect well on your performance review.

In this light, "finding new reasons" I would infer to mean deviating from established best practices in any way without a damn good reason, even then without that reason being challenged and vetted.

All of the creativity in security should be in the research. For production implementations I don't want any creativity. Ideally, I'd want prod to be the opposite of research: Conservative and uncreative, struggling to stay awake because it's so boring and uneventful.


Sure, I get all that, but I want the technical details. Or pointers to where I can learn more. His advice says what not to do, but not really what to do or where to learn more.


(Hi I'm not 'tptacek, I work with him but don't speak for him)

If you just want a few examples of stuff that can go wrong that doesn't go wrong if you just use a stored token instead of crypto, or doesn't go wrong if at least you use symmetric crypto instead of asymmetric crypto:

- Some bugs are about negotiation, e.g. key material misuse between RSA and HMAC schemes in JWT.

- Some bugs are about cryptographic implementation, such as nonce reuse in ECDSA, RSA padding oracles, or RSA keygen vulns (see the Infineon bugs a few months ago). These are almost exclusively in the asymmetric crypto camp.

- Some bugs are about specification issues, such as non-mandatory aud (audience) and exp (expiry). You don't have specification issues if your tokens are opaque random numbers. You have significantly fewer (and less severe) specification issues if your tokens are versioned and you don't allow negotiation.
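
For comparison, the "opaque random numbers" baseline is about this much code (a Python sketch; the in-memory store stands in for a real DB or cache):

  import secrets

  valid_tokens = {}  # toy in-memory store; a real one is a DB or cache

  def issue(user_id):
      token = secrets.token_urlsafe(32)  # CSPRNG output; nothing to parse
      valid_tokens[token] = {"user": user_id}
      return token

  def check(token):
      return valid_tokens[token]  # set membership is the whole verification

  def revoke(token):
      valid_tokens.pop(token, None)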


If you're open to questions, I'd be curious for your take on token binding as well, as that seems to be in the same ballpark (or same sport at least).


This is actually pretty common and I don't think there is a better way to do it. I am surprised you haven't encountered these systems in cloud environments, particularly where multiple organizations (or just untrusted networks) are involved. Or maybe I'm misunderstanding your criticism.


How do you support multiple keys and key rotation without using RS256?


> As a general rule of thumb: public key is what you use when you have no other choice.

I think you mean, "private" key is what you use when you have no other choice.


No: he's using public key as in public-key cryptography, as opposed to secret-key cryptography.


I think "public key" as in "asymmetric cryptography with public key."


No, that's not what I mean.


Also, issuing servers and verifying servers don't even need to be part of the same organization, allowing you to outsource credential management (see Auth0, Firebase Authentication).


You can do that fine with secret-key cryptography too.


By downloading a shared key over TLS rather than the provider's public key?

No difference from the perspective of the token consumer. From the perspective of the token generator, it means rotating per-tenant keys rather than a single keypair.


I addressed this elsewhere (https://news.ycombinator.com/item?id=16072690) but to quickly recap: that's not the hard problem, and hardened SAML IdPs that have the option of exploiting this turn out to have per-tenant keys anyway so that they can get cryptographic binding instead of counting on audience restrictions being checked.

Additionally, your TLS terminating stack is much better hardened than median in-app crypto code.


We do exactly the same; it's a great feature to have in JWTs.


> My only nit --- apart from the "auth enc seal sign" names, which aren't coherent --- is why do public key at all?

> Why not punt "seal" and "sign" into a "v3", when/if it's needed?

Would you be happier seeing something like this?

  - v1: HMAC-SHA2, AES-CTR+HMAC-SHA2
  - v2: RSA and all its sins
  - v3: Libsodium crypto_aead_*
  - v4: Libsodium crypto_{box,sign}_*

> What's the use case for it? Who's asking for it? Specifically who?

The only use-case I'm aware of as of this morning is OAuth2 users who currently use JWT for access tokens. https://bshaffer.github.io/oauth2-server-php-docs/overview/j...


I would happily do away with v1 completely. In my unfortunate experience you're still giving developers enough rope to hang themselves with by choosing older ciphers just because they are well-known.


To extend on that, I can list the lessons I learned from actually running a similar scheme internally in production for a while now:

1. Give developers the minimum number of knobs. If they need to choose between 'auth', 'enc', 'seal' and 'sign', it's still going to be confusing. In my case they can personally come to me and ask "Which version should I use? Which type should I use?", but on its own it's not clear.

I'm still undecided whether supporting an unencrypted authenticated token is useful, but 'enc' should ideally be the default, with users going a little bit more out of their way to do anything else.

2. Asymmetrically signed payloads with an expiry are this strange bird that looks like a duck, quacks like a duck, but unfortunately _is not a duck_. They're certainly useful, but when I was calling them 'tokens', my users were confused about whether they should use them as access tokens.

It's great to have a simple and robust encoding format for packaging payloads with an Ed25519 signature, but it's better to clearly call it something other than 'token', or you'll end up with users deciding they should use asymmetric access tokens just because it makes the key more secure.

3. You want to have some mechanism to specify a key ID or key version in the wrapper, because this allows you to do automatic key rotation gracefully. You can use the optional (AEAD) payload for that, but it wouldn't be entirely clear to users how.

Let's say you want to rotate the key every week, but your longest token has a TTL of 24 hours: you will need to have a window where you would support two different encryption keys. In other rotation scenarios (e.g. rotating every 24 hours with longest token TTL being 30 days) you can have many more encryption keys supported concurrently. You can iterate and try all possible keys of course, but having key ID/version is much cleaner.
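
A rough Python sketch of the key-ID approach (key names and dates are made up):

  import secrets

  # Keyring of concurrently valid keys (all names hypothetical).
  KEYRING = {
      "2018-01-08": secrets.token_bytes(32),  # current key
      "2018-01-01": secrets.token_bytes(32),  # kept until its tokens expire
  }
  CURRENT_KEY_ID = "2018-01-08"

  def key_for_new_token():
      return CURRENT_KEY_ID, KEYRING[CURRENT_KEY_ID]

  def key_for_incoming_token(key_id):
      # O(1) lookup via the key ID in the wrapper, instead of trying
      # every supported key in turn.
      if key_id not in KEYRING:
          raise ValueError("unknown or retired key ID")
      return KEYRING[key_id]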


> In particular: the whole protocol is versioned, rather than serving as a metaprotocol that negotiates the underlying cryptography.

In my view, that's one of the big takeaways from early cryptographic protocols: complex handshake negotiations just won't be secure, so just don't. The other is that crypto code should ideally not contain any parsing at all.

e: Well that and the whole debacle on the level of primitives and operation modes of course.


Really? Both Azure AD and AWS Cognito use RS256 as the only supported algorithm. Perhaps my sample size is small.


Google OAuth 2 tokens are also RS256 JWTs


The requirements and budget of Google, Amazon and Microsoft are very different from a median start-up.

A simple example from a closely related area: OAuth 2 token replay attacks. I auth against A with Facebook; A uses the token to impersonate me against B. ISTR Google had basically the same bug. A median startup will not find that bug. Storing a random token in a database? Very likely they won't mess that one up. Also, if you do (let's say your randomness generator is MT as opposed to a CSPRNG), it's easy to fix, because you control the validation endpoint.


It is not a bug with Google but instead a problem with "B" as they choose to ignore the "aud" part of the token.

You can't say password-based authentication is bad because some developers choose to store passwords in plain text. The blame squarely lies with the developer.

People implementing auth without being willing to go a little deeper may hurt themselves.


This is great. Having versions instead of kitchen sinks, and having those versions get rid of the footguns, is exactly what fixes the cryptographic JWT perils.

Note: you probably still just want a random key in a database. And revocation is still an issue. But if you’re absolutely sure you want to mint tokens...


What is a use case for minting tokens and one for not minting?

I've seen this referenced a few times before but I don't understand why minting tokens is bad.

Also, isn't creating a random key as a token the same as minting one? Or what is the correct context here for "minting tokens"?

Thanks!


When I say "minting a token", I mean taking some data (e.g. your user name), adding some metadata (audience, expiry), and performing some operation on it (signing, encryption, authenticating) such that a different party will accept it as-is. (This is like "minting" because if you have a plausible-looking coin people would just accept it without having to go ask the issuer if it's a real coin via serial number or whatever.)

When you generate a random key, you have to go ask a trusted third party what the data associated with that key is (and, therefore, if it's still valid).

Minting tokens is bad because it's drastically more complicated, has tons of failure modes, and most of the time you end up doing tons more database transactions anyway that are way more complicated than set membership (i.e. is this still a valid token?).


I see. It makes sense.

In this context then, what are your thoughts on minting tokens using Macaroons [0]? I ask because they seem to be of much lower complexity than JWTs, since they just hash the data, so there's no encryption algorithm negotiation or anything of the like; however, they still meet your definition of minting a token.

Obviously it would depend on a case-by-case basis to prefer macaroons over say a random token or vice versa, but in general is chaining HMACs still considered "drastically more complicated"?

My uninformed opinion is that verifying a hash shouldn't be that bad but my security expertise is pretty much non-existent so it would be great to be schooled on this :)

[0] - https://research.google.com/pubs/pub41892.html


If you're going to mint a token, macaroons are a great spec. But when designing a critical security system, the default should be the simplest possible thing, and DB lookup is still much simpler than macaroons. So, you have to have a big problem that macaroons solve first. In my experience, the cure is worse than the disease here.


Thanks a lot for the insight.

It's amazing how seemingly simple things can get so complicated when talking about security.


I did extensive research on various token types including Macaroons. While Macaroons look simple on the surface there are numerous edge cases that the verifier should take care of (Macaroons form a DAG because you can have multiple ones referencing each other). Plus there is symmetric decryption going on in case of third-party caveats. The caveat system can be very powerful (as caveats are just byte arrays you can use any system to encode claims). But there is no standard to encode them and that causes tight coupling between systems using Macaroons. I can go into more detail if that's interesting for someone.


I've read about Macaroons and the lack of a standard to encode caveats (mainly from the google group about macaroons) but I'm interested in your take, so more details would be great!


If you compare it to JWT: in JWT, claims are basically a string name (like "exp") and a JSON value. It's pretty simple. You need to check if the value for exp is a number and is greater than current millis. In Macaroons the entire caveat (a.k.a. claim) is just a byte array. The majority of libraries use predicates "X op Y" encoded in UTF-8 (e.g. time < xyz, account = 12345). That looks good; in JWT you don't have the relation encoded, you just need to know that for exp it's "less than" and for aud it's "equals".

Unfortunately that's where the simplicity of Macaroons ends, because even the official docs on libmacaroons have some weird choices, like encoding dates in an ISO-like format (without a timezone specifier at all). Also, using UTF-8 is not that efficient when one could encode the date as a number. Of course Macaroons are flexible, and because the caveats are byte arrays you can use CBOR or whatever to conserve space. But then you lose compatibility. And you need compatibility if you want to utilize third-party Macaroons (if a third party mints you a Macaroon with an unknown caveat, that makes the entire thing invalid). Third-party Macaroons are the most powerful feature of the entire system, but they introduce a lot of complexity: Macaroon references (look out for loops!), symmetric decryption, and out-of-band communication needed to collect everything needed to authorize. That's why existing libraries allow only a limited subset (e.g. no third-party caveats referencing other third-party Macaroons), and if you're writing a library (as I did) there is a lot of reverse engineering (e.g. libsodium is used in most libraries for symmetric encryption).
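
To make the first-party "X op Y" flavour concrete, here's a minimal sketch with the Python pymacaroons library (location, key, and predicates are made-up placeholders; third-party caveats, the complicated part, are omitted):

  from datetime import datetime
  from pymacaroons import Macaroon, Verifier  # pip install pymacaroons

  key = b"root-key-known-only-to-the-issuer"  # placeholder
  m = Macaroon(location="https://api.example.com", identifier="key-1", key=key)
  m.add_first_party_caveat("account = 12345")
  m.add_first_party_caveat("time < 2018-01-09T00:00:00")

  def check_time(caveat):
      # The verifier, not the token, decides how "time < X" is parsed.
      if not caveat.startswith("time < "):
          return False
      return datetime.now() < datetime.fromisoformat(caveat[len("time < "):])

  v = Verifier()
  v.satisfy_exact("account = 12345")
  v.satisfy_general(check_time)
  assert v.verify(m, key)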

There are also some small things like de facto implementations using slightly different operations than the paper and a custom binary format. JWT just uses JSON and base64. (For the record I'm not a big fan of JWT).

What I like in Macaroons is the ability to further confine permissions. Sadly, PAST doesn't seem to have something like this, instead opting for a "safe JWT" approach.


Awesome, thanks for the details.

One of the issues that I saw being discussed on the forums was precisely the lack of a standard format for caveats and even some suggestions to create one.

I think it's a very cool project but for some reason I don't think it has caught on. Do you think that if someone came up with a suitable standard it would see broader use? Or what is your take on the lack of interest (IMHO) in this awesome idea?

Edit: Any way I could contact you to discuss this a bit more, if you are open to it? :)


Macaroons have many moving pieces, caveats being just one of them. Standardization would have to approach it from all angles. For example, the binary format, while simple, is also a "de facto" standard resembling protobuf, but not exactly. Maybe CBOR would be a good idea (or a subset of it)?

As for caveats, there are two forces at play: some want a simple format ("X op Y"), but there is another way that I've explored in a PoC - use a simple stack-based script system (similar to Bitcoin Script [0]). Then you can encode some really interesting properties inside your tokens (like requiring hashes of different properties, or signatures), so it would be a kind of meta-authorization scheme where you can delay the decision (e.g. require EC signatures for some sensitive operations). Of course this brings additional complexity, but on the other hand using caveats in "X op Y" form doesn't bring any benefits over JWT claims.

[0]: https://en.bitcoin.it/wiki/Script

The ultimate reason why I abandoned Macaroons (after working on some prototypes and creating a JS library for them) is just the amount of complexity needed to work with them. And remember - code working with Macaroons is executed before the request is authorized (that's what they are for), so this code needs to be carefully audited, and any bug can have severe consequences.

Compare that with JWT: you can write a verifier in simple code (JSON and base64 are built into any language), and simple is easier to audit. In Macaroons you first need to decode base64, parse a custom binary format, check consistency (no cycles in third-party caveats), and decrypt third-party keys. Moreover, there can be multiple Macaroons for a given ID; you need to check if at least one satisfies the request. Better - check all of them and then see if at least one works (to protect against side-channel attacks). So there is some inherent complexity in the entire stack. Removing it would require some substantial work. IMHO that's why they are not widely used. Oh, did I mention the existing libraries have some rather significant issues [1]?

[1]: https://github.com/nitram509/macaroons.js/blob/master/src/ma...

So to answer your question: I would gladly see some standardization effort, but it needs to be really thorough to have a good effect. Unfortunately, Macaroons are already plagued by old cruft (e.g. third-party caveat IDs are called "cid" and first-party caveats are also called "cid", but they are completely different) that no one wants to touch for fear of breaking existing code.

If you don't mind we can keep the discussion here, it's good for others to see (I got into Macaroons because of one of these threads) and it's google-able :)


Just a small 'way forward' for those wanting the ability to revoke a JWT, after I came up with a solution on my last project: a 'gateway'. Use OpenResty to verify, via a proxy pass, that the JWT's ID is stored in a Redis cache. When the authentication service grants a JWT, add its ID to this cache along with a way of identifying the user. That way the entire advantage/disadvantage of decentralised authentication is not fully weakened, and OpenResty + Redis can be relatively fast.
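
A minimal Python sketch of the same idea (the OpenResty gateway would do this in Lua; the key naming and claims are made up for illustration):

  import jwt, redis  # pip install pyjwt redis

  r = redis.Redis()

  def on_token_issued(token_id, user_id, ttl_seconds):
      # The auth service registers the JWT's ID (jti) when granting it;
      # the entry expires on its own when the token would have anyway.
      r.setex("jwt:" + token_id, ttl_seconds, user_id)

  def gateway_check(token, secret):
      claims = jwt.decode(token, secret, algorithms=["HS256"])
      # Revocation = deleting the Redis key before the token's exp.
      return r.exists("jwt:" + claims["jti"]) == 1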


Forgive my ignorance, but if you are going to all the trouble of storing the JWT id in a server side database for verification why don't you just store the JWTs' claims as well and just hand the client an opaque random id? Your gateway could do the lookup and supply the claims to your backend without the client knowing anything about it. You wouldn't even have to validate the claims, since they never leave your servers.

The best part of using JWTs is that you can validate without a central database. If you need a central database for sessions anyway you might as well store the claims in it.


Good solution! OpenResty could create the JWT and add it to the forwarding headers, although the user would still need a cookie to maintain a session with the proxy.

Only disadvantage I could see would be performance.


Congratulations, you’ve just invented session tokens. Don’t get me wrong: stateless tokens (like JWT) are terrible in part because they’re irrevocable, but then why bother reimplementing session tokens with JWT? Just use plain old session tokens instead.


They are not irrevocable - just harder to revoke.

If you want something that avoids storing all tokens, you can use a blacklist. You don't even have to check for revocation on every call - you could perfectly well use short-lived (say 10 minute) access tokens, force frequent refresh using a refresh token, and then only check the refresh token calls.

Whether you want to use it or not is a matter of making the right trade-offs. Stateful tokens are simpler to implement on the surface, but you have to keep in mind that the database lookup itself could be vulnerable to timing attacks.

Unfortunately, most database-based token implementations I've seen perform lookup based on the token string, instead of looking up the user and then checking all of the user's tokens, one by one.

And if you're not a small startup and actually have to handle loads north of 10,000 QPS (some of us do), these stateful tokens become quite expensive.


You blacklist your tokens in a cache and that's all.


So, for a secure system, the blacklist cache/service becomes a single point of failure (see also certificate revocation lists, ssl/tls).

I personally think renewable, short-lived tickets/tokens are the lesser evil - accept that a compromised session is valid for 10 minutes (5 + worst-case/accepted clock drift).

A long-lived certificate can encode authorization, but needs a short-lived ticket to be valid ("an ID card that says three-star general, plus today's passphrase").


> you probably still just want a random key in a database.

In general, I think that this is the wrong approach, because that means adding a database round-trip (which in a large system is almost certainly a network round-trip) for each and every API call. Notably, if the entire system is secured in depth (which large systems should be), it means adding a network round-trip for every layer of the API (e.g. one for the frontend server to validate the user, then another for a backend server to validate the frontend server). Using a stateful token[0] replaces that network round trip with a public-key or hash verification, which is much faster.

> revocation is still an issue

I think that generally revocation concerns are a bit overblown. Even with a database lookup on every request, there is a (small) amount of time that one is willing to act on out-of-date information (after all, the user's access could be disabled immediately after the lookup, before the action is performed). Almost every business has some window in which it is willing to act on old information: better to make it explicit than to leave it implicit.

I really like the approach often seen in OAuth2 setups: a long-lived stateless refresh token[1] paired with a relatively short-lived stateful access token (sketched below, after the footnotes). E.g. an email provider might only bother refreshing a read-email access token every five minutes or so, but might wish to refresh a delete-email or send-email token more frequently (or require a database lookup each time).

[0] When I write 'stateful token,' I mean a token which is full of state; confusingly, some folks call this a 'stateless token,' because the relying systems do not need to store or consult state.

[1] When I write 'stateless token,' I mean a token which carries no state and thus must be looked up in some form of database — a random key in a database would suffice.
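
A rough Python sketch of that refresh/access split (stores, keys, and claim names are all made up; 'stateless'/'stateful' in the comments use my definitions above):

  import base64, hashlib, hmac, json, secrets, time

  REFRESH_DB = {}               # toy store: refresh token -> user
  SIGNING_KEY = b"server-key"   # placeholder

  def issue_refresh_token(user):
      t = secrets.token_urlsafe(32)   # 'stateless' per [1]: must be looked up
      REFRESH_DB[t] = user
      return t

  def refresh(refresh_token):
      user = REFRESH_DB[refresh_token]   # the only online check in the flow
      claims = {"sub": user, "exp": int(time.time()) + 300}  # 5-minute token
      body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
      tag = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
      return body + "." + tag           # 'stateful' per [0]: carries its state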


> In general, I think that this is the wrong approach, because that means adding a database round-trip (which in a large system is almost certainly a network round-trip) for each and every API call.

In general, asking a database for set membership is not close to the slowest thing applications do.

> Notably, if the entire system is secured in depth (which large systems should be), it means adding a network round-trip for every layer of the API (e.g. one for the frontend server to validate the user, then another for a backend server to validate the frontend server). Using a stateful token[0] replaces that network round trip with a public-key or hash verification, which is much faster.

A handful of those set membership checks: still not the slowest thing applications do. (Also, I don't think it's a given that internal auth has to work the same way external auth does, for a bunch of reasons.)

> I think that generally revocation concerns are a bit overblown.

Without revocation, you can't log out of things, and you can't invalidate sessions after credential rotation. So unless you're saying tokens should be valid for some tiny number of seconds... in which case, it's questionable that you're buying a lot of performance for your drastically more complicated token scheme.

> Even with a database lookup on every request, there is a (small) amount of time that one is willing to act on out-of-date information (after all, the user's access could be disabled immediately after the lookup, before the action is performed). Almost every business has some window in which it is willing to act on old information: better to make it explicit than to leave it implicit.

Median response time is not comparable to usual token expiry times. If you do make them comparable, then you're making that DB check every time anyway. The entire point of the token is to do that less. You're right that it's implicit, but it's also "the fastest that it possibly ever could be", so that's not a bad kind of implicit.

If you really desperately want to make the check faster, why is JWT better than a local token cache?


> In general, asking a database for set membership is not close to the slowest thing applications do.

From Jeff Dean's list of numbers every programmer should know[0], a round trip within the same datacenter takes on the order of 500,000 ns; a main memory reference is on the order of 100 ns. How often will an application be making an order of magnitude more than 5,000 main memory references to service a request? Sometimes, sure.

In our experience online validation completely dominates our workloads.

> I don't think it's a given that internal auth has to work the same way external auth does, for a bunch of reasons.

Oh, you're completely correct. It has upsides & downsides.

> Without revocation, you can't log out of things, and you can't invalidate sessions after credential rotation.

I pointed out that I like the use of stateless tokens, which can be revoked. In many cases 'log out' can just be destruction of the token. And in many cases it doesn't make sense to invalidate access just because credentials have rotated (in many more cases it does, which of course is perfectly supportable with stateless tokens).

> So unless you're saying tokens should be valid for some tiny number of seconds.

I'm not: I'm saying that in many cases there's no business need to assure revocation within $SMALLNUM seconds, and that the business costs of online validation utterly dominate the costs of running a service.

> If you really desperately want to make the check faster, why is JWT better than a local token cache?

There are only two hard things in computer science … cache invalidation is one of them.

[0] https://gist.github.com/jboner/2841832


> From Jeff Dean's list of numbers every programmer should know[0], a round trip within the same datacenter takes on the order of 500,000 ns; a main memory reference is on the order of 100 ns. How often will an application be making an order of magnitude more than 5,000 main memory references to service a request? Sometimes, sure.

That argument is only valid if your application doesn't touch disk and doesn't touch the network to go talk to some other service anyway.

Do you have a concrete description of what "online validation" specifically means for you, and how long it takes? How long does validating the token take instead?


Just in case someone else reads this thread later, the point I was going to make:

- DC roundtrip: 0.5ms (per GP's own numbers)

- P256 ECDSA signature validation: 2ms [0]

[0]: https://www.cryptopp.com/benchmarks.html


> That argument is only valid if your application doesn't touch disk and doesn't touch the network to go talk to some other service anyway.

Why? Adding another source of “slowness”[1] isn’t free just because other sources of “slowness” already exist, especially if it’s already close to being unacceptably slow as it is.

I mean, sure, if the request takes ten minutes anyway and the validation check takes a few seconds, nobody will notice, but if I’m doing, say, a single non-local database lookup for the request then adding a second one for verification doubles the time it takes to service the request.

[1] I’m not saying that a network round trip and database lookup are slow, just that for the sake of this argument, they are being considered slow.


Do you have specific performance data for a comparable token validation?


I’m not really arguing against your point, just pointing out that your statement isn’t necessarily true. You said that if you do any network or disk access then validation will be negligible as it would be dominated by that. I’m simply saying that while this is probably true in most real world cases, it may not be so if for example both validation and normal request handling do a simple database query, then validation is 50% of the request time (or 25% if the request takes twice as long etc). Since the person you were relying to above said something about performance sensitivity, this overhead could be too much.

Having said that, I do believe that what you’re saying is the right approach for 99.9% of use cases and I would imagine that in almost all cases the performance hit really is negligible.


I gave specific examples of relative latency going the other direction in GP thread. But yes: not only is it not actually slow, it’s a much simpler engineering exercise to make it fast (see caching argument, same thread).

Is it literally definitionally impossible that minted tokens have a useful application? No, of course not. But absent very specific cases I’m going to argue for the thing that’s safe, fast & simple :-D


> There are only two hard things in computer science … cache invalidation is one of them.

I mean, it's a pithy saying, but TTL invalidation is a problem you have to solve with JWT too. Caching tokens is the easiest possible cache invalidation problem: definitionally, you know a priori exactly when the token is invalid.
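
A toy Python sketch of such a cache (the online check is a stub; names are made up):

  import time

  _cache = {}  # token -> (claims, expiry timestamp)

  def db_validate(token):
      # Stub for the expensive online check against the session store.
      raise NotImplementedError

  def cached_validate(token):
      hit = _cache.get(token)
      if hit and hit[1] > time.time():
          return hit[0]               # cache hit: token cannot have expired yet
      claims = db_validate(token)
      # Expiry is known a priori, so invalidation is just this timestamp.
      _cache[token] = (claims, claims["exp"])
      return claims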


I really don't get the whole "the spec supports choosing of algorithm, therefore the whole implementation is bad"

If my server-side application sends a JWT with a "good" algorithm, and disallows any other alg's, wouldn't that prevent attacks?

Why do we need a whole new implementation?


The problem is that it's not idiot-proof. If someone doesn't understand why the algorithm is important, they might choose none. It's easy to say "they shouldn't be using JWTs if they don't know how to use them", but everyone starts somewhere, and everyone puts stupid bugs into production.

JWT is safe, as long as it's set up correctly, but safe-by-default is a better option.

That being said, I'm not going to swap out my JWTs with PASTs. I know what algorithm I'm using, why I'm using it, it is safe, and I'm verifying them properly.


I am totally with you. For me, JWT is the idea of storing encrypted values client-side which you can use to authenticate the client, as opposed to the old way of just storing a random 'dumb' session ID with the linked values somewhere on the server.

So yes, I appreciate secure and easy-to-use implementations of that idea, but always harping on how insecure JWTs are, just because there are easy ways to do it wrong, is like telling everybody that password logins are bad because some people implemented them in Flash.

> Why do we need a whole new implementation?

I think the problem which many people have with JWT is that it is not strict enough and does not define certain things. PAST uses the same idea, but does not allow the wide variety of different algorithms we could use with JWT. In my eyes that makes it easier to use, because you do not have to search for implementations with compatible and secure algorithms, but just for implementations using the latest version of PAST.


> If my server-side application sends a JWT with a "good" algorithm, and disallows any other alg's, wouldn't that prevent attacks?

It does not. Unless you have specifically hardened your server to refuse to even try verifying tokens which use "bad" algorithms, a client can still present a token signed with one of those algorithms, and attempting to verify it may pose a risk.
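
For illustration, here's roughly what that hardening looks like with the Python PyJWT library (the algorithm pick is just an example):

  import jwt  # pip install pyjwt

  def verify(token, secret):
      # Refuse unexpected algorithms before attempting any verification.
      if jwt.get_unverified_header(token).get("alg") != "HS256":
          raise ValueError("refusing token with unexpected alg")
      # algorithms=[...] makes the library enforce the same restriction.
      return jwt.decode(token, secret, algorithms=["HS256"])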


But my code can just ignore the alg passed by the user and use whatever one I want to use.


If you're going to ignore compatibility anyway, you could also use good crypto. The JWT spec doesn't define what "Recommended" and "Recommended+" mean, but apparently RSA+PKCSv15 is Recommended+, so I guess it means "crypto known to be broken in the 90s".


I think one of the issues is a very practical one. When everything is called JWT, it's hard for users to figure out whether it's a secure or insecure implementation.

It's completely possible to do secure things with JWT, especially if you control every producer and consumer, but it's not guaranteed.


It's completely possible to do insecure things with PAST.

There is always going to be someone who hardcodes the root password into a public GitHub repo.

The problem with JWT is IMO largely that things became too easy, so people coded without thinking. I'm not sure that's easy to prevent. I don't blame the JWT spec for mistakes that are so obvious.


There's also the issue that the alg field is in an encoded section of the JWT payload and has to be base64-decoded and then JSON parsed. There have been buffer overrun and malicious JSON attacks on JWT.

PAST at least moves that to a clear text prefix in the vx.scheme pattern. Theoretically, that doesn't even stop it from having a v0.none or some other dumb algorithm such as JWT allows in a bad version suite, but it does at least mitigate against decryption attacks.
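
A hypothetical sketch of the difference: with PAST the version/purpose check happens on the raw string, before any base64 or JSON parsing:

  ALLOWED = {("v2", "auth"), ("v2", "enc")}  # whatever your app accepts

  def precheck(token):
      # Raises ValueError on malformed input; no base64/JSON touched yet.
      version, purpose, payload = token.split(".", 2)
      if (version, purpose) not in ALLOWED:
          raise ValueError("unsupported PAST version/purpose")
      return version, purpose, payload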


I have implemented JWT in my app, but the library I used (Guardian, written in Elixir) only allows you to use HMAC-SHA512 by default. And I've left it that way.

Should I still be worried? I get that JWT's algorithm flexibility is overall a bad thing, but if I only allow one, should I continue to worry?


I haven't reviewed said library, so I'm taking your word for it that it's actually limited to that suite :-) The problems with JWT are more complicated than just negotiation, but you should be OK here.

Here's why:

- Some bugs are about negotiation, e.g. key material misuse between RSA and HMAC schemes. They don't affect you, because you don't negotiate.

- Some bugs are about cryptographic implementation, such as nonce reuse in ECDSA. They don't affect you, because _HMAC-SHA512_ doesn't have most of these problems.

- Some bugs are about specification issues, such as non-mandatory aud (audience) and exp (expiry). Audience shouldn't be a problem for you, because the only audience is you and there's only one secret key, so you get automatic audience restriction via cryptographic binding. Expiry, well, that's on you.
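
(For illustration, recent versions of the Python PyJWT library can make exp and aud mandatory rather than best-effort; the audience value is a placeholder:

  import jwt

  def verify(token, secret):
      return jwt.decode(
          token, secret, algorithms=["HS512"],
          audience="my-service",                # placeholder audience
          options={"require": ["exp", "aud"]},  # reject tokens missing these
      )

Presumably your Elixir library has an equivalent knob.)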

Why did you use JWT to begin with? (What does minting tokens buy you?)


Just to clarify, it isn't limited to that suite, but it has a default whitelist allowing only that algorithm. So I COULD change it, but I didn't.

"Why did you use JWT to begin with? (What does minting tokens buy you?)"

Absolutely no compelling reason apart from:

- Never want to roll my own auth, and there were already libraries in elixir and ember to work with JWT, so easy to cobble together

- The HS512 stuff seemed secure enough

- Impression that JWT is generally where things are headed (although I made this decision 2 years ago)

- I can embed some attributes in the token that can be read from the client side

- I liked the idea of authenticating without using a database query.

A lot of these things haven't been borne out in the last two years. I still make database queries during auth, and I still retrieve user metadata from an API in my client side. So just to clarify, I'm not attached to it in some way (I never am, to technical decisions). I mainly want to know if I should prioritize moving away from it or not.


Sorry, I didn't mean to come across as bitey. It's just that almost everyone I've talked to who has made this decision and lived with it for over a year has come to the same conclusion: just use a random key and store it in a database :)


Yeah no worries. I sort of agree with you. I don't have any strong opinions on why I chose it apart from that it was easy to implement.


Simpler is almost always better.


Guys, let me compliment you on some nice work here. Maybe there's still a little spit and polish needed, but the important point is it moves the ball forward.

Sometimes there's so much passion here around implementing cool new things that, while those things are important, it's easy to forget that simplicity, and better outcomes on average, are not in any way lesser technical advancements. Simplicity also requires good taste, to boot, imo.

On a side note from one of your posts, doesn't it make you chuckle a little to think "If you've already decided to implement Javascript Object Signing and Encryption (JOSE), whether you want JSON Web Tokens, JSON Web Encryption (JWE), or JSON Web Signatures (JWS), you should question this decision. You're probably making a mistake.", given the massive growth they've seen over 2-3 years in some very high profile cloud systems? But what to do...onward and upward then. :)

Finally, I appreciate the dispassionate viewpoints, i.e. "I'm not attached to it in some way (I never am, to technical decisions)". Always a pleasure to work on hard problems and debate with folks who have adopted that perspective.

Anyway, well done.


Yea, I'm using JWT for authentication, but I still find myself doing a db query for the 'authorization' piece (so I don't bother putting any 'role' information in the JWT, since that seemed rather a bad idea to me, anyway).

Still don't have a great solution for revoking tokens before they expire.

At least it's stateless :)


Revocation isn't such a big deal if you have short lifetimes in your tokens and do auto-refresh (which is quite easy to do in a single page app)


What does it being stateless buy you?


Well, not much from a security standpoint. But I rather don't enjoy dealing with statefulness on the server, so I try to minimize it wherever possible.


Why is the server stateless? I thought you said you end up calling up the db all the time for authz decisions anyway. That sounds like you're still managing a bunch of state server-side.

Unless you're saying "my server process itself is stateful, all the state lives in the database": in which case, yes, but the tokens bought you nothing. If you have random tokens and you store them in the database you have the same situation with no crypto to mess up.


Yes, there's a database (pgsql) for persistence, though no 'session state' is stored there. All of the middleware/services are stateless.

And yes, I realize now that I could accomplish much the same thing without using JWT (a decision made some time ago, when everyone was raving about JWT), but I've got bigger priorities than ripping up my auth system (which to the best of my knowledge is working acceptably well) ATM.

But I do plan to take a look at it and perhaps migrate to something else in the future :)


Sure, sure: I'm not saying you have to go rip anything up right now (unless someone finds a specific vulnerability), but I was trying to tease out what exactly the benefit was :)


May I just point out that Microsoft in ASP.NET Core is really pushing everyone to JWT (for web API / SPA, anyway).

I am just trying to build apps; I couldn't care less what protocol is in use, as long as it protects the users. Over the last few days of research (including coming across this HN thread https://news.ycombinator.com/item?id=13865459 from a few months ago) I can't help but feel a) trapped into using JWT and b) that it is probably the wrong thing to use and I am going to regret it.


I do wish a different acronym had been chosen. When I search for "[language of choice] JWT" pretty much all results are relevant. But even if this new token scheme takes off, it will forever be a hassle to find relevant results for "[language of choice] PAST".


I spent two weeks (my Christmas vacation) working on rough drafts for several problems I wanted to solve in 2018. PAST was one of the items I listed.

(The list is here: https://github.com/paragonie-scott/public-projects/issues/6)

99.9% of that time was spent trying to come up with a better name/acronym, without success. I decided to just give it a plain/obvious name until a better one surfaced.


Versioned Protocol Security Tokens: VPST

Novice-Proof Security Tokens: NPST

Just Verify Valid Tokens: JVVT

Safer Security Tokens: SST

Note: the first one is my only real suggestion, the rest are just for fun. And you are right, it is surprisingly difficult to come up with a good name.


Refer to it with a version number: PAST2, PAST3, etc.

You could also drop the use mapping: change 'sign' to 'public-auth' and explain that it's a 'sign' operation (a.k.a. digital signatures)


my favourite is PAST4 tee-hee


PINJWT - "PAST is not a JWT". // Please, do not take it seriously.

Edit: Good job on PAST and your other projects. I often follow your GitHub activity and publications to check what interesting things you are currently working on.


Maybe just change "agnostic" to "neutral": PNST? "Independent" is probably better, but PIST may not be so great a choice (:


VJWT: Versioned JWT


This doesn't solve the criticism against JWT being used for sessions, which is one of the main points against JWT expressed on the very site linked at the top of the README.


There's nothing I can do at this layer that will stop people from using JWT/PAST/etc. as an attempt to build stateless session management systems for some ill-conceived "horizontal scalability" requirements, except maybe continue to tell people this is a bad idea and don't do that.

The rest of the points (i.e. the problems with the JOSE standards) are what PAST seeks to solve. The "do not misuse" problem is more complicated, and if I were to add e.g. "do not use this for stateless sessions" at the top in big red letters, that will only tell developers "this is unsafe, keep using JWT instead".


> that will only tell developers "this is unsafe, keep using JWT instead".

That's a good point. Maybe a header in the readme/docs like "Stateless Sessions", followed by "Using PAST/JWT/etc. for stateless sessions is a terrible idea, because kittens will die needlessly and painfully [ obviously using an actual summary of why ]. Don't just take my word for it, here are some resources explaining further..."


Why is this a bad idea exactly? I'm still very interested in the idea of using stateless sessions. Is it just that it's hard to expire sessions server-side, or is there more to it?


I've written about this at length here: https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-ba...

(It's also the first link in the README for the project this Show HN is linking to, FWIW)


Those are mostly the drawbacks of JWT, less so of using stateless sessions altogether.

I found some additional reasons from a page that was linked from that last link here: http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-fo...

- They take up more space

- You cannot invalidate individual JWT tokens

The other reasons seem a bit weaker.

In your opinion, are those also the reasons why you wouldn't use PAST for stateless sessions?


> In your opinion, are those also the reasons why you wouldn't use PAST for stateless sessions?

Yep


This looks promising, and it's from a very well respected security researcher.


This library is a great example of clean and beautiful PHP code out in the wild!


I don't get it. We used to use something like a pipe-delimited string, then JWT, now PAST. Isn't the encryption doing all the work regardless of how the data is structured?


Getting the encryption right is pretty tricky! How do you verify the encryption method for a message, for example? That's a real problem in JWT: that's how RSA privkeys leak. PAST solves this by not negotiating. How do you make sure nobody's doing nonce reuse in ECDSA? That's a real problem in JWT. PAST solves this by having (v2) specify exactly how to do that.

Just because this specifies a format, doesn't mean it's just a format :)


Sure, but to the GP's question, the format of JSON vs pipe-delimited strings is not the problem; it's WHAT you put in the JSON or string and what it allows (e.g. configuration) that is the problem, correct?

GP: Basically PAST is just limiting your options down to combinations we think are secure and eliminating things we know are insecure. However, it does this with WHAT it puts in the JSON, not with the fact that it's JSON vs anything else. As you said, that's just serialization.


Sort-of, but not necessarily. It's a lot easier to get the format right if you know there's an authenticator and you know what length it is and it's totally separate from whatever is coming next -- so you still want clear, out-of-band signaling for the real data. Once you have that format, you're right that the exact serialization doesn't matter.

The extreme example of this is XML DSIG and XML canonicalization in general.


Agreed, but there's not really anything inherently insecure about JSON vs concatted strings vs binary (well, readability aside on that one). It's all about the JWT spec being complicated and easy to mess up.



A common security issue I've seen with uses of JWT doesn't have to do with JWT itself but with how it's used by front-end developers. It's commonly stored in localStorage instead of an HttpOnly cookie, which leaves it exposed to theft via cross-site scripting.

More details here: http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-fo...

Shockingly, the advice I've seen from folks like Auth0 to protect against this is "keep your token expiration low", or it isn't mentioned at all.

I don't imagine PAST gets around this, as it's more like misinformation around the storage mechanism, but I think it's worth mentioning in any "how to use" section about PAST or JWT.


This is somewhat orthogonal to the security goals I'm trying to tackle, but still very relevant to the ecosystem that currently uses JWT.

So I've opened an issue to address it before v1.0.0 is tagged: https://github.com/paragonie/past/issues/14


If something is an HttpOnly cookie, and I get XSS, why can't I just hit your API as much as I like anyway?


For Python developers: I've started an implementation here: https://github.com/JimDabell/pypast


I had the impression that the main problem was that JWT was marketed as stateless and superior and then you were stuck with stolen tokens.

How does PAST solve this? Is it even possible to get secure stateless auth?


Why not propose changes to the RFC instead of creating a new standard?


"A secure alternative". Citation needed.


I'm not a security expert but when I looked into JWT I was terrified at how easy it was to screw up. Glad to see I'm not the only one.



