A practical proposal for migrating to safe long sessions on the web (developers.google.com)



Note that the article buries a much easier way to deal with this problem:

"some websites don’t verify the user’s authentication on each request (i.e. there is no way to revoke the session cookie once issued)"

Which implies that if you do verify the session on the server, and have a mechanism for invalidating those sessions on password change, etc., then you can just use very long-lived cookies and you're done.

There are very good reasons for doing that anyways, so no need for this hack.


You don't need to verify the session on the server all the time. Session verification very quickly becomes a bottleneck to app performance, since you're doing a separate database/cache lookup for each REST endpoint request. It's far easier, and provides much better performance, to just cryptographically sign a cookie that says "I'm this user".

This is where the short/long-term cookie comes into play, and is actually a neat idea to take care of the "how do I periodically check that the user hasn't logged out/changed their password/invalidated their sessions/etc" problem that the cryptographically-signed cookie brings up.
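
As a rough illustration of that signed-cookie idea, here is a minimal TypeScript sketch using Node's crypto module. The cookie format and key handling are made up for the example, not taken from the article:

    import { createHmac, timingSafeEqual } from "crypto";

    // Server-held secret; in practice this comes from configuration.
    const KEY = process.env.COOKIE_KEY ?? "dev-only-secret";

    // "I'm this user", valid until the embedded expiry. No DB row needed.
    // (Assumes userId contains no "." for simplicity.)
    function mintCookie(userId: string, ttlSeconds: number): string {
      const payload = `${userId}.${Date.now() + ttlSeconds * 1000}`;
      const mac = createHmac("sha256", KEY).update(payload).digest("hex");
      return `${payload}.${mac}`;
    }

    // Verify without any database or cache lookup.
    function verifyCookie(cookie: string): string | null {
      const i = cookie.lastIndexOf(".");
      if (i < 0) return null;
      const payload = cookie.slice(0, i);
      const given = cookie.slice(i + 1);
      const mac = createHmac("sha256", KEY).update(payload).digest("hex");
      if (given.length !== mac.length) return null;
      if (!timingSafeEqual(Buffer.from(given), Buffer.from(mac))) return null;
      const [userId, expiry] = payload.split(".");
      return Date.now() < Number(expiry) ? userId : null;
    }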


Signing something is great but it comes with some possibly unintended consequences.

If you want a stateless cookie, you won't be able to revoke it down the road. Which means if the cookie is long lived, someone who gets their hands on the signed cookie can impersonate a user for a long time.

If you cryptographically sign a cookie and it isn't stateless, you haven't really improved anything, because you still have to look it up.

Cryptographically signing a cookie is great, especially for distributed applications, but it does have some properties that might not be desirable.


Still learning about this, but couldn't you: 1. Sign a cookie. 2. Store revocations in a database (removed after session expiry time). 3. Use a distributed Bloom filter to avoid DB hits on every request?
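
For what it's worth, a toy sketch of that flow in TypeScript. The Bloom filter sizing and the checkDb() lookup are placeholders I invented for illustration:

    import { createHash } from "crypto";

    class Bloom {
      private bits = new Uint8Array(1 << 20); // ~8M bits; size to your revocation rate

      add(id: string) {
        for (const i of this.indexes(id)) this.bits[i >> 3] |= 1 << (i & 7);
      }

      mightContain(id: string): boolean {
        return this.indexes(id).every(i => (this.bits[i >> 3] & (1 << (i & 7))) !== 0);
      }

      private indexes(id: string): number[] {
        // Derive k = 4 indexes from one SHA-256 digest.
        const h = createHash("sha256").update(id).digest();
        return [0, 4, 8, 12].map(o => h.readUInt32BE(o) % (this.bits.length * 8));
      }
    }

    declare function checkDb(sessionId: string): Promise<boolean>; // hypothetical lookup

    const revoked = new Bloom(); // replicated to every server; rebuilt as entries expire

    async function isRevoked(sessionId: string): Promise<boolean> {
      if (!revoked.mightContain(sessionId)) return false; // common case: no DB hit
      return checkDb(sessionId); // rare false positives pay for the lookup
    }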


Yes you could, but you still end up with this dependence on a lookup when you store revocations.

Once you do that you lose the stateless benefit of cryptographically signing the object, and at that point you could just store the whole object and give the user a lookup id to the object without the complexity of cryptographically signing anything.


A table of unexpected revocations will be smaller and less write-intensive than a full table of sessions. Should be by a very large margin. And eventual consistency is fine.

So that could be the difference between whatever you're using for storage, and a tiny, fully replicated in-memory structure. Depending on your overall scale, of course.


> at that point you could just store the whole object and give the user a lookup id to the object without the complexity of cryptographically signing anything.

Only if the collision of lookup IDs (accidental or malicious) is effectively impossible. If it's possible to generate a collision, then you've thrown away your security.

This would also effectively give every server the ability to issue auth tokens (and mutate them in the DB), which is not a great choice if you care about security. But if you're handing out unsigned lookup IDs, you probably don't.


What purpose could signing a lookup ID possibly serve? Are you worried that someone might correctly guess the 128 bytes of the identifier, and hoping they won't also guess the 32 bytes of the signature?


You're using 128 byte identifiers? Why?

And actually, yes, if I were using 128 byte identifiers for some reason, I would still be concerned about malicious parties being able to cause collisions. Securely generating random numbers does not seem easier than authenticating.


Just to chime in since I'm the OP of this subcomment.

Generating 128 byte identifiers is ridiculously overkill. I can safely say if you generate 128 byte identifiers securely, the earth will cease to exist before a collision is found.

You would be safe generating cryptographically secure 128 bit identifiers and looking them up. It is trivial to generate IDs that will not collide.


My point is that, if someone is going to guess the N bit identifier, they can also easily guess the signature.


Depends entirely on how well you generate your IDs. Yes, if you generate cryptographically strong IDs of sufficient length, then you don't need to sign.

Signing avoids the need to generate secure IDs, though, and can also avoid hitting the DB for expired tokens. (Load from expired tokens is probably not a major concern, though.)


That sounds like a time-space trade-off to me. I'd think cryptographic signature verification would be more expensive computationally than a cache lookup and compare, but not require much space. Also, and maybe this is the key part, it would be more distributable since you don't need shared state.


Cryptographic signature verification can be expensive, but the previous commenter (and, to be fair, pretty much the whole software industry) is abusing the term "signature". Really, what we're talking about is cryptographically authenticated messages. Verifying an authenticator is extremely fast.


Can you clarify what the difference is here? How do you verify an authenticator without verifying a signature?


Pedantically, a "signature" is the product of a public key signature algorithm. It's what you use when you have multiple and/or anonymous verifiers.

An authenticator is the product of an authentication algorithm, which can be as simple as a keyed SHA-3 hash (a hash over the data to be authenticated and a key known only to the verifier).

Session tokens virtually never use signatures. The only party that needs to verify them is the serverside of the application that generates them; they're opaque to clients.
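
To make the distinction concrete, a small TypeScript sketch using Node's crypto module (my illustration, not from the thread):

    import { generateKeyPairSync, sign, verify, createHmac } from "crypto";

    const data = Buffer.from("token-payload");

    // Signature: anyone holding the public key can verify. Relatively slow.
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");
    const sig = sign(null, data, privateKey);
    console.log(verify(null, data, publicKey, sig)); // true

    // Authenticator (MAC): only the holder of the secret can verify,
    // by recomputing the keyed hash. A single fast hash operation.
    const secret = Buffer.from("server-side-secret");
    const mac = createHmac("sha256", secret).update(data).digest("hex");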


So you're assuming the session tokens are authenticated with a shared key. I guess that makes sense from a perf standpoint. It seems iffy from an isolation standpoint, though. I wouldn't think you would want every server holding effectively global auth keys.

Edit: I guess this is actually a good case for the 2CH described here. The long-session cookie could be cryptographically signed, while the short-session cookie could be authenticated with a shared key that gets rotated frequently so that a leak of the key cannot cause a long-term compromise.


I don't know what you mean by "isolation", but virtually every mainstream system that stores secrets in tokens uses MAC constructions, not signatures. Signatures are much more complicated and dangerous.

Don't ever use public-key crypto unless you absolutely need it.


Isolation meaning that every web server cannot issue authenticated cookies for arbitrary users. I'm clearly not a security expert but that seems to greatly increase the damage scale of a single web server breach.

Also, how would you store a secret in a token without encryption? Or was that just a misstatement?


In practice, you do one of three things:

1. You encrypt and then authenticate data, such as user profile information, so that clients can neither read nor alter it. This doesn't involve public-key encryption.

2. You authenticate data, but don't encrypt it, because it isn't secret but must not be changed; this is what you'd do if the only semantic value being passed in your token is a UUID, for instance.

3. You do neither of these things, and instead generate cryptographically random tokens of sufficient length that they can't be guessed, and then use those tokens as database lookup keys (sketched below).

All three of these things are lightning fast on any decent processor, including the one on your phone.

You are very likely to cost yourself more security than you gain by trying to do anything more complicated than one of these three things. Again: don't ever use public-key crypto unless you really need it.
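
As a concrete illustration of option 3, a short TypeScript sketch; the session store is hypothetical:

    import { randomBytes } from "crypto";

    // 32 random bytes from a CSPRNG: unguessable, usable directly as a DB key.
    const token = randomBytes(32).toString("hex");
    // sessions.set(token, { userId, createdAt: Date.now() }); // hypothetical store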


Thanks. Sounds like I need to do some more research on this. This isn't how I thought this was generally implemented by the big players, but I've not worked closely on auth either.

I guess for 2 and 3, you handle revocation by just deleting the relevant row or flagging a "revoked" column. For 1, add a unique id that you can add to a revocation list? Are there better/more typical ways to do this?


Revocation is a problem for encrypted tokens; it's a downside to the approach. The common solutions aren't particularly elegant.


A fourth option would be just like option 3, except you only use part of the DB token in the lookup and compare the remainder in constant-time if and only if the record exists (if you're worried about the database lookup leaking a valid authentication token).

Agreed that public key crypto isn't really appropriate here.
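
A sketch of that fourth option in TypeScript; db.get and the record shape are hypothetical:

    import { randomBytes, timingSafeEqual } from "crypto";

    declare const db: {
      get(lookupId: string): Promise<{ secretHex: string } | undefined>;
    }; // hypothetical session store

    // Token = 16-byte lookup id + 16-byte secret, hex-encoded.
    function mintToken(): string {
      return randomBytes(32).toString("hex");
    }

    async function validate(token: string): Promise<boolean> {
      const lookupId = token.slice(0, 32);
      const secret = token.slice(32);
      const record = await db.get(lookupId);
      if (!record || record.secretHex.length !== secret.length) return false;
      // Constant-time compare, so a DB timing side channel can't leak the secret.
      return timingSafeEqual(Buffer.from(secret, "hex"),
                             Buffer.from(record.secretHex, "hex"));
    }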


He means that the auth key has to be shared between N servers, i.e. you can't have a scenario where a server can verify all the others' signatures but can only issue its own.



Yes; and CPU is cheap as hell. Round trips to a cache are expensive.


That depends on where your cache is.

If your cache is on a separate box and you have to make a network round trip, yeah it is expensive.

But if your cache is in-process, the access is cheap.


Also, CPU tasks like authenticating a message are cheap to scale horizontally. A cache is a centralized bottleneck.


You can re-verify on the backend however often you like.

Make the signed cookie include user-id and date (and maybe a session id). Accept it without a backend check while it's less than X minutes old. But if it's older than that, check the backend to see if the user has invalidated sessions since it was generated. In the typical all-good case, generate a new signed cookie and put it in the response. No need for service workers or anything.

A web company I worked for a couple years ago implemented something almost exactly like this while I was there.
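
In TypeScript, the check might look something like this; parseSignedCookie() and sessionsInvalidatedSince() are hypothetical helpers standing in for the MAC verification and the backend lookup:

    const MAX_AGE_MS = 5 * 60 * 1000; // accept without a backend check below this age

    declare function parseSignedCookie(
      cookie: string
    ): { userId: string; issuedAt: number } | null;  // verifies the MAC

    declare function sessionsInvalidatedSince(
      userId: string, since: number
    ): Promise<boolean>;                             // backend check

    async function authenticate(cookie: string): Promise<string | null> {
      const parsed = parseSignedCookie(cookie);
      if (!parsed) return null;
      const age = Date.now() - parsed.issuedAt;
      if (age < MAX_AGE_MS) return parsed.userId;    // fast path: no backend hit
      if (await sessionsInvalidatedSince(parsed.userId, parsed.issuedAt)) return null;
      // All good: the caller sets a freshly-minted signed cookie on the response.
      return parsed.userId;
    }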


A simpler solution is to just keep the connection open until the session is complete. That way you can use the long-lived session token (in the cookie) each time the user visits your site/app and then keep a WebSocket open while the user is connected.

This has many advantages beyond the short/long-lived cookie stuff.


Actually, a good reason might be scalability. Authenticating every single request creates a big load on your authentication service. That's a lot of RPCs, particularly if you're using single sign-on and those RPCs are coming from every web app you run.


The point is that verifying on the server can be a pain. TFA's proposal avoids doing so, much of the time. One could imagine rolling out any number of parallel "service workers" to perform this task without needing to complicate the server with concerns about a single user's requests hitting multiple servers that therefore need to stay on the same page with respect to token revocation.


The session has to be fetched from cache/hot memory on every request, right? Authenticating the session on every request may be expensive, but simply clearing the session cache, so the request can't even get the session data, works much better.

So you don't need to invalidate the auth tokens. You just need to be able to delete a session from the cache to do revocation on long sessions. And this works with just backend changes; no client hokey-pokey needed.


"The session has to be fetched from cache/hot memory on every request, right?"

Nope. Let's say you have a REST endpoint that accepts a Pokemon id to add to your collection. If you cryptographically sign a cookie that says "I am this user id", you can assume the request to add the Pokemon to your collection is valid. There's other business logic to perform, sure, but "I am this user" is taken care of for you.

What Google is proposing is the ability to save a long-term cookie that says "I am this user" and a short-term cookie that says "I'm signed in". The requests periodically say "am I still signed in/am I still this user" and the server can periodically (eg not every request) look that up in cache/database.


My point is that, at some point, you have control on the server. Of course, you can encrypt the session information in the cookie, decrypt it on the server end, verify the signature, and certify that the user is who they say they are.

This just moves the problem a bit - if you want to log out a user, you can just erase the cookie key from the server side; the cookie will be just random noise at that point.

This approach, of course, assumes that your keys are user specific, possibly generated from the password.


You're right, if every authenticated cookie must also be validated against the DB, there is no point to using authenticated cookies. TFA describes short-lived authenticated cookies, which need not be validated against a DB, used in conjunction with longer-lived DB sessions, which are only validated against the DB when the short-term cookie has expired. Those DB sessions end on logout or reset or whatever.


Would there ever be a reason to have the cookie expire at all?

If the password is changed, you can invalidate the session, but the cookie remains?


I wonder why this couldn't just be handled with two cookies, period: one a short-lived authentication token; one a long-lived revalidation token.

The former could be self-certifying (i.e., trusted if it's properly signed, with no auth-service round-trip); the second could require a round-trip to the auth service. On a request, when the server sees that the short-lived token has timed out, it checks the long-lived token; if still valid, then it reissues a short-lived token and sends it as a cookie value replacing the old short-lived token.

If multiple requests are in-flight, no matter — short-lived tokens require no session state, so all that happens is generating a few too many signatures.

Am I missing something?


Co-author of the article here.

An unstated goal (we probably should have stated it, looking back) was for developers to be able to easily move legacy sites with legacy endpoints to long sessions.

The service worker approach can effectively intercept requests to any endpoint and perform the re-generation of a short cookie if needed, without needing to change every page on the site or the legacy endpoints.

One additional benefit is that it minimizes the transmission of the long term token, which is generally good if you're worried about it somehow getting intercepted. You may or may not be too concerned about that risk though.
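
As a rough sketch of that interception (my illustration; the 401 convention and the /auth/refresh endpoint are assumptions for the example, not part of the article):

    // service-worker.ts (compiled to JS; service worker types come from the "webworker" lib)
    self.addEventListener("fetch", (event: any) => {
      event.respondWith((async () => {
        const retry = event.request.clone();      // keep a copy in case we retry
        let response = await fetch(event.request);
        if (response.status === 401) {
          // Short cookie expired: spend the long-term token once, then retry.
          const refreshed = await fetch("/auth/refresh", { credentials: "include" });
          if (refreshed.ok) response = await fetch(retry);
        }
        return response;
      })());
    });

No page or legacy endpoint has to change: the worker sits in front of every request.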


I second your question. The only downside of sending both cookies on each request is a bigger request size, which is not a bad trade-off for a much simpler client (no service workers; everything is managed by the cookie jar).

Also, the approach with 2 cookies (stateful + stateless) has been known for a long time. Eran Hammer (author of the OAuth1 and OAuth2 specs) wrote about using this approach at Yahoo here: https://sideway.com/room/4Z (search for Yahoo).


Isn't the original problem that someone could steal the long-lived cookie in the first place?


The intent is not to provide an "unstealable" token, but a revocable one. You can't (and don't need to) directly revoke the short-lived access token, but you can effectively revoke it by disallowing refresh, i.e. by revoking the long-lived refresh token.


The long term cookie requires authentication from a service (e.g. maybe a database backed session), and the service can invalidate a particular cookie on logout, or by user request.


I don't really understand what the service worker that makes an extra network request adds here.

Why not just have the code that validates the session just do an `if (short_session != valid) { lookup_long_session_in_db() }` then return a fresh short session cookie with whatever request you're currently handling?


Pretty much this. Especially if you are already using promises in your library, having an async function that returns the token to you solves that problem. Now you don't have to worry about where your token comes from, whether it resolved immediately or had to do a round-trip to refresh the token, etc.


That's the other option; sending both session cookies at the same time and having the server do the work. Google's solution pushes that "if (short_session != valid)" logic to the client vs having it run on the server.


1. This assumes that every page you might make the request to can issue auth tokens. This might be fine for small sites where auth isn't a concern (but then just issue a long-lived token and don't bother with all this), but for larger sites, capability to issue auth tokens is generally isolated from the rest of the services.

2. Your pseudo-code returns a legitimate short session cookie in response to gibberish. I'm hoping that this is just because you elided a lot of detail.


Co-author of the article here.

I agree with dpark - an unstated goal of this proposal was for developers to be able to easily move legacy sites with legacy endpoints to long sessions without needing to update those endpoints or pages.

The service worker approach can effectively intercept requests to any endpoint and perform the re-generation of a short cookie if needed, without needing to change every page on the site or the legacy endpoints.


1. Why does every endpoint being able to refresh a token (or whatever you want to call it) imply that auth must not be a concern? I will fully admit that I've never worked on really large-scale web services, so hopefully that's not a super obvious thing.

I would think that having all pages be able to handle auth is just an architecture question. Maybe I've just been too deep for too long in writing a webserver in Iron [0], but the way you handle auth in it, by using middleware and wrapping any endpoint that needs auth with an auth handler, makes doing something like this trivial.

2. Definitely left out a lot, I mean it doesn't really return anything at all. Just meant it as some shorthand pseudo code to make explaining it shorter. :)

[0] https://github.com/iron/iron

EDIT: Actually, now I'm not sure if you're talking server-side or client-side, but I think the argument still stands. I was assuming auth "tokens" could actually be cookies, but even if not, your API library just needs a check to see whether the request returned updated auth. It's still just in one place, handled in all calls, rather than only in the single call that would otherwise handle it.


I was referring to server side. If you're really serious about security, you probably want to isolate the login process to a dedicated server. If your web server is breached, do you really want your auth keys leaked? And do you really want access to your user's creds available en masse to the breached server?

If you're running a small scale service, you're unlikely to build a dedicated login service. If you're running a large scale service, you probably should.

Edit: Based on Thomas's answers here, I am wrong about this, so I guess take my thoughts with a grain of salt.


The benefit of the service worker is that it can intercept all fetch events and would allow you to move validation code out of the thread that is running the UI. It can make for a simpler client, but really it's just shifting the complexity to another layer. However, there are a lot of potential benefits to service workers (e.g. caching, background downloads, notifications, though that last one is possibly a negative), so perhaps the added complexity is ok.


Right, but I'd really doubt that "store new auth token if it exists in response" would take long enough that it causes any kind of problem.

I know that there are benefits to using service workers in general, I guess I was just thinking about it in terms of adding a service worker to an application that isn't currently using them just to do the handshake they're suggesting. In which case it would be a lot of added complexity and wouldn't work across the board, whereas doing the check on the server side would add very little complexity and would work with every browser made since about 1995.


> We all love how native apps will ask you to login only once and then remember you until you tell them you want to log out.

I wish that were true more often. There's a handful of native apps that don't remember my credentials and I have to go look them up on the desktop in my password manager. For example, I installed Pokémon Go but haven't looked into it further because I don't have my Google password memorized; it's a randomly generated password that I expect my computer to remember for me.

It seems like every native IoT widget controller I try wants me to remember more credentials.


Get a password manager that syncs to your phone. I'd be completely lost without it. I love 1Password for syncing. Just pop it open, copy/paste, done.


I'm aware of these options and use LastPass. My issue is that I shouldn't have to copy and paste at all. These apps should be using system services for this (e.g., the Keychain API on iOS).


I use KeePassX and Keepass2Android, with the database file stored on Dropbox. It's free and runs natively on Linux.


Since upvotes are invisible, I will comment to say I second this. Keepass2Android is amazing and I love this solution, even though I've otherwise always hated password managers.


You don't necessarily need to authenticate every request. For example, Amazon.com allows a user to be "semi-logged in", but forces you to authenticate yourself when doing something like making a purchase.


That's what we did, except that the short-lived token is a JWT, not a cookie, while the long-lived hash is a cookie backed by a database. But that cookie only lives for the session of your browser. Because of that, we mostly only query the database every 5 minutes.

Actually, we also have a workaround to support Safari: putting data in LocalStorage that tells the other tabs that a new token was recently issued, and so on.


So is this like an OAuth2 refresh token?


In the long run it would be better to find a standard way of doing this so that browsers implement it without the need for a worker initiated from JS. Static pages could benefit from safe long lived sessions as well.


I've explored this in the past. Wouldn't it be possible to simply set two one-year cookies, staggered so each is issued at the mid-expiry point of the other? Then re-set the first when it expires, and vice versa.


But if the user resets their password, that should log out all of the user's sessions, and here you'd have to wait for the cookies to time out before they are re-authenticated. (Or make a check on every request to see if the password has been changed, which is just a pain.)


You don't check on every request to see if the password has changed, you check to see that the session is valid on every request. You should be doing that anyways.

Then all you have to add is a way for password changes to invalidate the session.


The thing is, the way a session is usually "validated" these days is to store a user id in some serialized format, encrypt that, and send it off as a cookie that expires in a relatively short amount of time (hopefully with the expiry time included in the encrypted message too). Then the server just has to try to decrypt the cookie, and if that yields a valid user id, the session is valid. This way you don't cause an extra database request for every user request.

I'm not saying it's good or secure, but it's the way I often see it done/suggested to be done.
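
A sketch of that pattern using an AEAD (AES-256-GCM), so the cookie is encrypted and authenticated in one pass; the key management and wire format here are my own invention for illustration:

    import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

    const KEY = randomBytes(32); // in practice: a configured, rotated key

    function seal(userId: string, ttlMs: number): string {
      const iv = randomBytes(12);
      const cipher = createCipheriv("aes-256-gcm", KEY, iv);
      const body = JSON.stringify({ userId, exp: Date.now() + ttlMs });
      const ct = Buffer.concat([cipher.update(body, "utf8"), cipher.final()]);
      return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("base64");
    }

    function unseal(cookie: string): string | null {
      const raw = Buffer.from(cookie, "base64");
      const iv = raw.subarray(0, 12);
      const tag = raw.subarray(12, 28);
      const ct = raw.subarray(28);
      const d = createDecipheriv("aes-256-gcm", KEY, iv);
      d.setAuthTag(tag);
      try {
        const body = JSON.parse(
          Buffer.concat([d.update(ct), d.final()]).toString("utf8"));
        return Date.now() < body.exp ? body.userId : null; // expiry rides in the cookie
      } catch {
        return null; // tampered or wrong key: the auth tag check fails
      }
    }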


I figure if you need to re-issue, or initially issue, the two cookies, you set one of them to expire after six months and the other after a year.


Or you can just forget about the cookie on the server side - you don't need to expire anything.


I just hope that other tabs cannot see a cookie set within one tab. Or that there were some other way of isolating identity.

My concern is privacy.


It's just a cookie. It works like every other cookie.


Yeah, but I mean before they implement this scheme, I hope they modify the browser (or someone comes up with one) so that you can pick which group of cookies is accessible within different tabs.

With apps, I know that one app doesn't know my identity from another app unless I explicitly give it permission.


Firefox is starting a project on this, it's still in research stages though. https://blog.mozilla.org/tanvi/2016/06/16/contextual-identit... In the meantime, you can use different user profiles in Chrome or Firefox, but they work per-window not per-tab.


Great news. Thanks. Didn't realize that Chrome and Firefox do that on a per-window basis. I browse everything in incognito mode.


Firefox is working on it, you can try it in the nightlies, at least as of this post: https://blog.mozilla.org/tanvi/2016/06/16/contextual-identit...


Thanks.


Actually, a cookie is still only accessible if it's on the same origin or on a subdomain of the origin.


I did not realize that. If that is so, why is Firefox working on tab-cookie isolation (see other answers)?


This is mentioned in that blog post, but data is separated by a tuple of: (scheme e.g. HTTP or HTTPS, domain, port). Any page from one such set cannot access cookies, local storage, indexeddb, or cache from a page with a different set. The new feature simply adds one more attribute, so even if pages share scheme, domain, and port, they will not be able to read data from a site with a different userContextId.


That's more to separate work and private data. E.g., at the moment you can't log in to Office 365 twice; with tab-level isolation it would work. But actually it's more like a container which you can assign to multiple tabs: you could start a tab with the container "private" or with the container "xyz".



