Short session expiration does not help security (sjoerdlangkemper.nl)
696 points by ColinWright on Aug 18, 2023 | 419 comments



The main place I see short session expirations is on banking and financial apps, which seems defensible to me for a couple of reasons:

1) They're used by a wide variety of people, including people who may not own a computer or mobile device, or who may not have a backup device to use when their personal device breaks. This group is probably shrinking—more and more people have smartphones and the remaining people who don't have smartphones are probably also the people without bank accounts—but arguably you want to cater to this group, since they may already be a bigger target for scams.

2) They're very appealing to opportunistic attackers.

3) They're used by people in stressful or unusual circumstances, e.g., when traveling, if they need to make a large emergency payment, or if they're afraid of being scammed.

4) Most sessions aren't very long anyway: checking a balance, seeing whether your paycheck came in, making a single transfer or payment, etc. There are definitely exceptions—15-minute timeouts are very annoying when doing taxes, for example—but it seems like the annoyance in these situations is potentially worth the security benefits.

That being said, I don't know if short session expiration is the best solution in 2023. As the article points out, major corporations like Google don't use short sessions, even though their services are used for a variety of sensitive things and they're huge targets. But as briHass pointed out in another comment, they also provide tools to see which sessions are open and use a variety of techniques to detect sessions being misused. I suspect that's the actual best solution if avoiding session stealing is that big of a concern.


My main bank uses username + password + (random subset of ‘memorable word but actually unencrypted password’ driven by select boxes), then 2FA on top, and it literally feels like they just slapped a bunch of things together to add extra barriers to auth in sequence.

This is the UK. Back in Latvia I would just slide my ID card into a USB reader and cryptographically sign the session with a passcode. Same as chip & pin.


I'm Norwegian, but have lived in the UK since 2000, so I've seen the evolution step by step, and UK banks have pretty much just kept adding new steps every time fraud got too bad instead of rethinking things properly. It's infuriating.

Norway has a near-universal electronic signature solution, not quite the same as your Latvian one but similar-ish (and there's an EU/EEA requirement for mutual recognition at some point), and it feels much better. The Norwegian ones allow access to government websites as well, and can be used to sign contracts etc. I'm assuming the Latvian one has similar options.


Yeah your ID in Latvia has an electronic certificate for governments, banks, etc


I’m so envious of the Baltics’ use of technology at the institutional level.

It seems literally infeasible for the United States to have state-issued PKI. It’s a meta-partisan issue: no side trusts the other to manage crypto or computer systems.

So for example instead of a public many-to-many digital publishing platform we are stuck with the whims of grown men who want to fist fight each other in the Colosseum and all because every X is convinced every Y is so despicable and corrupt that such a public system would be much worse than what we have now.


The USA was distrustful of national ID cards decades before the current level of hyperpartisan rancor. We just don’t trust the government enough to let them track us everywhere.

Not that the current climate helps. The REAL ID Act of 2005 has been delayed so many times, now to 2025. Maybe we should just repeal it instead.


> We just don’t trust the government enough to let them track us everywhere.

That statement is both true and emblematic of the problem. Everyone is so cynical that a clean public-key, offline, certificate-based solution, with absolutely zero visibility to the government into who is doing what, would always be assumed to be part of some nefarious Illuminati/Democrat/Republican plot. No amount of technical auditing would prove otherwise to people; they'll always be convinced it's a scheme.

What we need from the government is so minimal really, just to sign certificates if we prove our identity. But it'll never happen in the US due to the distrust situation.


Because there has never been a proposal for a "clean public-key / offline / certificate-based solution". It's always a central database that also does x and y and will be available for w and z departments to do as they will with it.


It doesn’t matter though, because the number of Americans who understand public-key crypto well enough to critically read a technical analysis and judge it to be true is vanishingly small. Nobody has proposed it because even if any politicians have seen such a proposal on their desks, and even if they themselves grasp it (unlikely), they know most Americans would freak out on principle.


Why are you talking about hypotheticals and abstract "distrust"? We're already being harmed by existing central ID systems - social security numbers and driver's license numbers - with little hope of reform in sight. If there were something like the GDPR that prohibited companies from nonconsensually demanding, storing, or using these identifiers for anything but their bona fide governmental functions, then it might make sense to talk about adding a new system. But until the government is capable of reining in abuse of the existing systems, any new identification system would inevitably be a gift to the surveillance companies.


> We just don’t trust the government enough to let them track us everywhere.

Eh, that’s the pseudo-libertarian objection (which is… dumb as bricks, practically speaking) but the most common objection is one of access.

Until IDs are both free AND easy to obtain in the US (if you think they are, congrats: you’re in the lucky group) then a national id scheme will always be a non-starter.

IDs face the same problem voting has in recent years (and historically): the systems of power use limiting access as a tool to control those they don’t like.


> The REAL ID Act of 2005 has been delayed so many times, now to 2025

Can one fly without a REAL-ID-compliant license or ID?


Yes, with a passport you can

... Probably not what you were trying to ask. I thought the deadline for requiring REAL ID at TSA checkpoints had passed, but I haven't looked into it in years.


The deadline has been pushed to May 7, 2025.


> no side trusts the other to manage crypto or computer systems

Nor should they. Too much power for governments to have. Every so often some news gets posted here about some government official who just does not give a shit about people's rights, you can actually feel the contempt when you read what they say.

Government solutions are non-solutions. We should solve these problems with ubiquitous technology or not solve them at all.


Except our only solution to this problem today IS government-provided already (ID, SSN, or passport).


Which is dehumanizing. We are not cattle to be identified and marked.


But you need a way to identify people. If there's no good way to identify real people (government issued ID number), those that need them will invent bad ones (SSN, phone number, maiden name, maybe more). And companies like Google have their own identifier for you already, anyway. And you can't travel anywhere without a passport that is another identifier, anyway.

I'm impressed how much Americans dislike the idea of a mandatory government issued ID. I don't mean it in a bad way - this sounds like a very principled and idealistic stance. I just really like my government issued ID, and consider it a reasonable tradeoff.


I'm not american. Not only do I want the government to know nothing at all about me, I also want it to be illegal for others to develop those alternative identification methods. Those who "need" such things should have to figure out a way to avoid needing it.


Interesting. How do you identify criminals? Or for that matter, how do you identify anyone? "Just don't", or... some other solution?


I assume their version involves another individual unaffiliated with government or private industry taking their hand, gazing meaningfully into their eyes, and tearfully proclaiming “I see you” before letting them through the border or into the bank vault or whatever.


Ah so you are the man with no name, I assume?


My parents giving me a name is one thing. The state giving me a number and arresting me if I fail to produce that number to police on demand is another.


Which one of those is problematic to you? The state giving you a number or arresting you if you refuse to identify yourself? Because you've given yourself a nice straw man to fight by conflating them.


Oh he knows that, but that’s all their type has: faux-bravado against imagined threats.

Not least: they’re ignoring that simply having an identity is a traceable & trackable thing… because that would derail their paranoid fantasies where they’re a hero, fighting against a dystopian world.


It’s funny you think there’s a difference


It's funny you think there isn't a difference between the humanity of your parents and some faceless government bureaucracy.


It’s funny you think there is a separation between the two in any practical way.


If the government wants to do something they should publish open source code that implements a sane authentication system, and then have no part of operating it whatsoever. If it's good and free people will use it voluntarily. If it isn't then you certainly don't want the same people implementing anything mandatory.


The best part is that the US Government already did this -- it runs the second largest PKI. Second only to the Internet. It has issued more than 20 million certificates to individuals.


That's the part you don't want them to do though. Centralized PKI is bad for privacy and creates a single point of compromise. You don't want this, but for the whole population:

https://en.wikipedia.org/wiki/Office_of_Personnel_Management...

What you want is some well-reviewed code that a bank or utility company can "apt install" onto their server and get secure decentralized web authentication working in five minutes instead of leaving them to create some custom in-house contraption designed by a rotating committee of middle managers.

And they should really endeavor to break that PKI thing into smaller, independent, less centralized pieces. It's way too big as it is. There appears to be something called "Symantec" between "Federal Bridge" and "US Senate" and then another "Symantec" between "Federal Bridge" and "Naval Reactors" -- that doesn't seem great.



tbh I can't think of many cases where I'd want state-issued PKI. I already use public-key crypto secured by biometrics (Face ID) to make payments, log in to my bank, etc. None of that requires interacting with the state. For most normal computer use I quite like that random social media sites can't demand my identity, because it'd be too expensive for them to verify it using public records.


> I’m so envious of the Baltics’ use of technology at the institutional level.

You shouldn't be. I don't know if anything has changed in the last few years, but having to insert your ID into a card reader is very cumbersome from a user-experience perspective, especially since the world is moving away from having to plug stuff into computers.


> Back in Latvia I would just slide my ID card into a USB reader and cryptographically sign the session with a passcode.

I think this is a good authentication model, but it costs money. There is the upfront cost of the physical card, and then the higher cost of lost account recovery. I think that's the turn-off to most banks; they will have to staff a call center that can verify your ID, issue a new card, and then deal with your immediate concerns because you can't access the website. Passwords + security questions are often free, "oh you don't know your password? we'll just email you another one if you remember your first elementary school's name".

At the end of the day, they are allowed to cut costs on critical infrastructure (everyone's money!!) because we have a strong victim blaming culture in the US. If your password is guessed, it's because you're bad at picking passwords, not because passwords are an intrinsically flawed technology. If your money is lost, it's because "it's really cool to have transactions that can't be reversed, you should have done your due diligence". It surprises me that everyone is OK with this.

(Incidentally, the only place where I've seen the option to use cryptographic authentication is on Vanguard. They added it right about the time we were testing security keys / U2F at Google, and they were the administrator of our 401k plans. I think Google strong-armed them into implementing it! Would love it if they did this to some banks ;)


You’re taking a well-known system, criticising it, and burying the fact you’re from the US well inside of that.

We’ve already solved this problem outside of America you know.


In Norway, Latvia, Belgium and probably others, the card is issued by the government. So there is no cost to the bank to re-issue a lost card.


Ah, that's the key. We'd never get a national ID in the US, instead ironically forcing the costly KYC onto each individual bank. (And Twitter now apparently.)


There's already a Federal ID required for banking in the US: your social security number.


Except that when it was first implemented, the government explicitly stated that the Social Security Number was not a valid ID.

https://www.npr.org/2018/03/22/596180023/how-social-security...

Due to this, it is fundamentally missing multiple levels of security. You can actually guess someone's SSN within 2 digits if you know their date of birth, and location of/hospital of birth.


> random subset of ‘memorable word but actually unencrypted password’

This annoys me so much with my bank. Their app lets you enroll your account into Face ID authentication, then still asks one of the recovery questions every time.

The UX is awful. I'm so convinced that complaining anywhere won't actually help so I don't bother.

I'd love to know what the hypothetical attack scenario is that drove that decision, but I suspect there isn't one, and the app saves the username/password in encrypted device storage.


The more crap they shove in, the more some manager can boast about "improving security". And no one in the company wants to be the one saying no to "more security" so no one pushes back.


My 'UK' bank recently dropped using the password completely, likely because someone pointed out to them that since you could simply reset the password via the SMS 2FA, it was essentially pointless. So now the SMS and a 5 digit PIN are all it takes. Eventually they'll figure out that the PIN can be reset via 'just an SMS' too.

'security'


I was about to tell you to switch banks, then I realised you just described my bank.

Yeah, I don't understand why none of the mobile-friendly banks are willing to build even a basic web interface for use on the big screen.


Spain's banks (I've used two so far) simply use your ID number, which is used in a lot of places and not considered secret, and enforce a 4-digit password.

It’s an absolute joke.


I wondered about this once, but it kind of makes sense from a usability point of view.

Unlike most web services, you usually get very few attempts at a successful login before getting locked out, so even with four digits, the odds of a successful brute-force attack are very low.


I suppose so. I just find it funny that my bank has weaker password requirements than most (if not all) online services I use.


Bank of America requires you to tell them a 2FA code sent over SMS, when the SMS literally says:

   <#>BofA: DO NOT share this code. We will NEVER call you or text you for it.
No, it wasn't a scam; I've seen that process on the agent's display multiple times while physically visiting a branch.


My bank does that for in-person visits, but you key in the code on a PoS-style keypad at their desk.


Most banks in Spain require physical presence in the branch for 2fa


You mean to set up a second factor, they require you to go into a branch?


In Latvia what happens if you lose your ID?


Training people to always log in makes them susceptible to phishing attacks with fake logins. It becomes second nature to put in your email and password. What's even worse is that banks won't send messages over email; instead they make you sign into their "secure" message center. Somehow email is insecure, but sending the same info in a physical mailing is safer. Any time you select electronic statements instead of physical mailings, they send you an email and force you back to their site to log in.


Not only that, they put the link to the site in the email.


To be honest, I feel like biometric app unlock has largely made the short-token banking experience obsolete. I don't want to re-enter my credentials every 15 minutes; I just want my bank app to verify my biometrics on sensitive operations. The only real reason for short-lived bearer "tokens" these days is so you can deploy them in scenarios without revocation lists.


This is how the banking app my team built works -- on Android/iOS devices a hardware-backed keypair is generated; when a login is needed, the keychain is unlocked using local biometrics to perform a signing operation that authenticates the user.

There's a bit more to it than that because we support remote attestation, and you only get a read-only token until you've performed remote attestation (which generally happens quickly).

edit: The authentication results in a short-lived access token (5m) and a refresh token (20m), but the client can re-authenticate with the keypair challenge at any time.
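The challenge-response shape described above can be sketched roughly like this. All names are illustrative, and HMAC over a provisioned secret stands in for the real asymmetric signing (a production app would use a key held in the Android Keystore / iOS Secure Enclave, unlocked by biometrics):

```python
import hashlib
import hmac
import os
import time

# Stand-in for the hardware-backed key: in a real app this never leaves
# the device keystore and is only usable after a biometric unlock.
DEVICE_KEY = os.urandom(32)  # provisioned during enrollment

def server_issue_challenge() -> bytes:
    # Fresh nonce so a captured signature can't be replayed.
    return os.urandom(16)

def client_sign(challenge: bytes, key: bytes) -> bytes:
    # Client side: biometric unlock gates access to `key`, then signs.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, signature: bytes, key: bytes) -> dict:
    # On success, mint the short-lived access token (5m) and refresh
    # token (20m) lifetimes mentioned in the comment above.
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("bad signature")
    now = time.time()
    return {"access_expires": now + 5 * 60, "refresh_expires": now + 20 * 60}

challenge = server_issue_challenge()
tokens = server_verify(challenge, client_sign(challenge, DEVICE_KEY), DEVICE_KEY)
```

The nonce is the important part: it turns possession of the key into proof of a live device rather than a replayable credential.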


> major corporations like Google don't use short sessions

Ask a Google employee. When I worked there sessions were limited to 20 hours. Beyond that full re-authentication with password + security key would be needed.


I’m pretty sure they mean for consumer accounts, not their own organizational logins.


Why'd you leave?


Dissatisfaction with the speed at which Google does things due to organizational reasons. Lots of smart people working very hard, only to have little to show.


There is a class of people who feel a smartphone is too much of a distraction to carry around all the time. They will carry around a flip phone that can do basically nothing beyond calls, sms, and camera. These are otherwise perfectly normal members of society with bank accounts.

I don't know how common these people are. I happen to know a handful of them but I probably don't travel in typical circles.


I live in a very Hasidic Jewish neighbourhood in North America; they (almost) uniquely use flip phones instead of smart phones.

I've even seen women with cordless landline phones mounted to their shpitzel so that they can use it "hands-free", but without breaking their rules around kosher technology.

Some context:

https://www.theguardian.com/world/2022/jul/19/kosher-phone-d...


In case it wasn't clear, I didn't mean to disparage people without phones or computers (or for that matter, people without bank accounts), just to note they are a small group of people and shrinking.


The people I brought up used to have smartphones and made a conscious decision to go back down the tech tree. So an inflow to the no smartphone group. Certainly not outweighed by the outflow so the group is still shrinking as you state.


Smartphone sounds like the worst device you can use for banking.

If remotely compromised, the attacker gets everything they need: the SMS second factor, the user's password, access to the user's network, and the user's behavior profile to know when to execute the attack so that they get as much time for it as possible.


On the other hand I would assume that modern smartphones from manufacturers that care about security to some degree are much harder to compromise compared to random Windows laptops.


Pixel phones had an exploit discovered last year where you could unlock any phone by entering the PUK code. You just replaced the SIM card with your own SIM, whose PUK you knew. Seems very easy to me.

https://9to5google.com/2022/11/10/pixel-lockscreen-unlock-bu...


Health services also log you out after ~15 min: Kaiser, One Medical, Epic MyChart, etc. Very annoying.


Some of my highest-paranoia sites and apps are things like my dental insurance. Yeah guys, hackers are out to get me and they can't wait to impersonate me and reschedule my next cleaning for an inconvenient time!


In my experience, a large percentage of compliance officers believe that this is a non-negotiable requirement for HIPAA compliant web apps. My reading of the Security Rule is much more pragmatic, so I would argue that there are other ways to meet this standard in many situations.


I think reasons 1 and 3 I listed for financial apps and websites apply for health services, but 2 and 4 don't, so I see why they do it, even if it's less clear-cut.


I see pretty short timers in enterprise SaaS as well, the reason being that a license for 400 users is much more expensive than a license for 30 users.


Pretty much all SaaS products charge per unique user, not active sessions, so I don’t really understand what point you’re trying to convey.


Nearly all of our agreements are priced by active sessions. I guess your business just needs better negotiation?


> but it seems like the annoyance in these situations is potentially worth the security benefits.

I would be happy if there was a way to just request a long session for that. No need to force everything into a short session just because it is a sensible default.


I'll add one more: Fintech app users constantly ask for short sessions.

As the developer, I really didn't want to add it, but who cares what I want.


> I suspect that's the actual best solution if avoiding session stealing is that big of a concern

that big of a concern for whom? Google doesn't care because Google has constructed a world where when something goes wrong, sorry, it's on you.

Your bank does care because if something goes wrong, it's frequently on them. The bank times you out to protect them more than you.


In a lot of cases, short session expiry is used as a hack around subpar authentication standards such as SAML/OIDC where there is no reliable backchannel for the identity provider to tell the service to expire sessions (following a credential change, user being deleted, etc).

The short session expiry is used as a workaround to force the third-party service to regularly check-in with the identity provider, thus placing an upper bound on how long IdP-initiated changes take to reflect on all third-party services.
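The check-in loop described above can be sketched like this (the 10-minute TTL and all names are illustrative, not taken from any particular IdP or product):

```python
import time

ACCESS_TOKEN_TTL = 10 * 60  # seconds; the upper bound on propagation delay

class IdP:
    """Toy identity provider that third-party services check in with."""

    def __init__(self):
        self.disabled = set()

    def refresh(self, user: str) -> dict:
        # The service calls this whenever its access token expires;
        # that forced check-in is what bounds how stale its view can be.
        if user in self.disabled:
            raise PermissionError("user disabled; end the session")
        return {"user": user, "exp": time.time() + ACCESS_TOKEN_TTL}

idp = IdP()
token = idp.refresh("alice")   # initial issue
idp.disabled.add("alice")      # IdP-side change (password reset, deletion, etc.)

# The stale access token stays usable until token["exp"], i.e. for at most
# ACCESS_TOKEN_TTL seconds; the next forced check-in then fails.
try:
    idp.refresh("alice")
    change_propagated = False
except PermissionError:
    change_propagated = True
```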


This is correct, but it's uncharitable to call it a "hack" in many contexts. In OAuth, for example, the access token / refresh token concept is literally spelled out in the spec. It's not a workaround; it's how you implement eventual consistency in a loosely coupled system where the IdP can't push updates to clients, because it doesn't know all of them by design.


It's fair to argue that point. If you need something like that, then this aspect of OIDC etc. is not a hack. But very few people look at the question of how to integrate an external identity provider and then decide that loosely coupled, eventually consistent is the right choice. Instead, developers mostly just choose whatever seems sufficiently popular, build their system around it, and only look elsewhere if the popular choice is visibly much worse at its job than the alternatives ("visibly" given the knowledge about the topic at hand, that is; most people I've talked to just see OIDC flows as a given fact about how authentication has to work).

From a practical perspective, there are lots of applications out there which are perfectly reachable from the outside and which use an OAuth2/OIDC library as a standard component, where they could forward an update from the identity provider with a simple library call. And think about how many edge cases in front-end applications you could eliminate if you didn't have to be ready to get a new token at any moment because the current one has just expired. [1]

In my opinion, pushing updates to clients should be the default for identity protocols, which you only opt out of if you have special needs. And then hopefully the documentation tells you very clearly to use very short token expirations.

[1] And yes, you technically still have to be prepared for that at any time, but you can push the trade-off of making that case less user-friendly much further if it occurs only seldom.


Hang on, we're talking about user sessions and you're talking about access tokens.

Short expiration of sessions is bad because of the terrible UX. Access tokens can be refreshed without user interaction, so it's not the same issue there.


"Session" here is the word used for the duration in which an access token is valid. You may be talking about UX, but the submission is talking about access tokens.


The article specifically mentions the need for users to re-enter their username and password as a downside of short-lived sessions, so I think the author's definition of "session" extends as long as the refresh token lasts.

I think that most of the non-short-session examples — Google, Microsoft, GitHub, etc — are using an access token + refresh token pattern.


That's because it's a poorly written article by someone who doesn't know the difference. It interchangeably talks about issues only with the UX and the actual technical backend pieces involved.


An access token's lifetime bounds a delegated authorization, not an authentication session. For first-party mobile apps and the like, the two might act similarly, but for other use cases they will not.

The access token may be so my account at an event coordination site has free/busy access to my Google calendar, and that authorization might last for years.


> Access tokens can be refreshed without user interaction, so it's not the same issue there.

Not on mobile, when the app is not in foreground or gets killed by "energy saver" mechanisms - Samsung is fucking annoying in that regard, even on 4GB RAM and more it keeps closing Chrome with 10 tabs after a minute or two and it completely loses state, as do many games - even taking a call in foreground can be enough.


4GB is not a lot on Android so 10 tabs sounds about right. You need a lot of wiggle room for garbage collection to be efficient and you can't swap to flash without burning write cycles and power on small devices.

That aside, I don't see any technical reason why you can't renew a token that expired a week ago. Renewal just makes sure nothing changed (e.g., the user hasn't been deleted) while you were gone. It doesn't have to do any user-facing auth.


> In a lot of cases, short session expiry is used as a hack around subpar authentication standards such as SAML/OIDC where there is no reliable backchannel for the identity provider to tell the service to expire sessions

Both SAML 2 and OIDC have standard mechanisms to expire sessions.

One problem is that sessions are always a per-site, bespoke technology. Flagging a session as expired in a back-end database isn't going to help if the front-end uses cookies holding JWTs as an optimization.

So some sites prefer front-end expiry (which is also standardized by both). Some sites won't bother to support either.

Add on the inconsistent behavior of cookies across browsers these days, and it becomes very hard to support. It has been prioritized out of most things.

There is also the issue that sign-out doesn't make sense for many things. Logging out of Google in my browser shouldn't kill my Discord desktop session just because I chose the SSO option for authentication.

SLO makes sense in enterprise scenarios (where many big SaaS products tend to still not support it) and in single-party consumer scenarios - where SSO is used as integration glue to make something that "looks" like it is all one site, such as first-party Google logins.


> /OIDC where there is no reliable backchannel for the identity provider to tell the service to expire sessions

That's what the OAuth/OIDC refresh token is for: https://oauth.net/2/refresh-tokens/


There’s a solution for this! It’s called SCIM and it lets you sync user updates from the directory so you can expire sessions when users are deactivated.

(I work at WorkOS.com which helps developers with this.)
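The SCIM-driven flow mentioned above amounts to a small handler: when the directory PATCHes a user with active=false, the app drops that user's sessions. The payload shape follows SCIM (RFC 7644); the in-memory session store is hypothetical:

```python
# Hypothetical session store: user id -> list of live session ids.
sessions = {"alice": ["sess-1", "sess-2"], "bob": ["sess-3"]}

def scim_patch_user(user_id: str, patch: dict) -> None:
    """Handle a SCIM PatchOp; deactivation expires all of the user's sessions."""
    for op in patch.get("Operations", []):
        value = op.get("value", {})
        if op.get("op", "").lower() == "replace" and value.get("active") is False:
            sessions.pop(user_id, None)  # expire every session for this user

# What an IdP sends when an admin deactivates the user in the directory.
scim_patch_user("alice", {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [{"op": "replace", "value": {"active": False}}],
})
```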


I'd rather rely on session expiration than on SCIM sync working well. I've implemented SAML and SCIM services. With SAML you implement things once, tweak it a bit, and then it works with all the IdPs, even those you've never heard of. SCIM on the other hand had only two client implementations that I'm aware of (at the time I worked with it, at least), and they were sufficiently different from each other that you kind of had to do two implementations. Not to mention it uses stupid JSON Patch things that are a pain to work with unless you use MongoDB or similar, I guess. And stupid limitations on forcing the sync on Azure AD that I forgot the details of.


How would those other technologies deal with the situation where grandma signs into her banking account via app on her phone and then gets distracted, leaving her phone unlocked on the table for teenage Jimmy to find? My bank uses short sessions so I get signed out within a few minutes of inactivity. Long sessions would seem to leave grandma wide open to this sort of local attack.


By asking for credentials/auth for every meaningful interaction.


The main problem is the way these standards are implemented. OAuth 2.0 (on which OIDC is based) does define a reliable back channel for session revocation, although it is still pull-based instead of push-based. The scheme is simple:

1. Use short-lived access tokens (forcing clients to refresh often)

2. Check for revocation on every token refresh

There is even an OAuth 2.0 RFC for a token revocation API[1], and an OpenID Connect extension for backchannel logout[2]. Unfortunately, many OAuth 2 implementations (especially those where refresh tokens are JWT-based) do not support revocation of refresh tokens at all.
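The two-step scheme above fits in a few lines. This is a sketch, not a complete authorization server: store and function names are illustrative, and `revoke` plays the server-side role of an RFC 7009-style revocation endpoint:

```python
import secrets
import time

ACCESS_TTL = 5 * 60          # step 1: short-lived access tokens
refresh_store = {}           # refresh token -> user
revoked_tokens = set()       # tokens revoked via the revocation endpoint

def issue(user: str):
    """Initial grant: one refresh token plus one short-lived access token."""
    rt = secrets.token_urlsafe(32)
    refresh_store[rt] = user
    return rt, {"sub": user, "exp": time.time() + ACCESS_TTL}

def refresh(rt: str) -> dict:
    # Step 2: every refresh is a chance to enforce revocation, so a
    # revoked grant dies within one access-token lifetime.
    if rt in revoked_tokens or rt not in refresh_store:
        raise PermissionError("refresh token revoked or unknown")
    return {"sub": refresh_store[rt], "exp": time.time() + ACCESS_TTL}

def revoke(rt: str) -> None:
    # What a POST to a token-revocation endpoint would do server-side.
    revoked_tokens.add(rt)
```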

The other big problem is that refresh tokens are too often misunderstood. I've consulted quite a lot of development teams who implemented OAuth 2.0 (both as a client and an AS/RS), and most of the developers did not initially understand what the refresh token is meant to do. This resulted in a lot of wrong implementations.

If I blame the standards for something, I'd blame them for being too complex and flexible. This goes without saying for SAML - nobody should be using that if they have any choice. But even simple OAuth 2.0 needs care. There are many RFCs out there, and if you just read the original RFC or one of the many low-quality guides on the interwebs, you probably would miss the point about how to properly use a refresh token. RFC 6749 subtly hints at this strategy I mentioned above, but never fleshes it out as a recommendation, let alone mandates it.

OpenID Connect is even worse. It introduces a new type of token (the ID Token) with unclear purpose and security, encourages JWT use without setting a standard for revocation, reinforces the insecure implicit flow, and introduces a whole new similarly insecure flow (the hybrid flow) that serves no purpose except making sure there are more vulnerable apps out there.

Both OAuth 2.0 and OIDC can be implemented securely, but the base standards do not guide you on how to do this, and - in the case of OIDC - contain way too many footguns. I think OAuth 2.1[3] is a step in the right direction. GNAP[4] (a.k.a. "OAuth 3.0", "XYZ", "TxAuth") looks to me like a step in the wrong direction (even more complexity), but perhaps it's too early to tell.

[1] https://oauth.net/2/token-revocation/

[2] https://openid.net/specs/openid-connect-backchannel-1_0.html

[3] https://oauth.net/2.1/

[4] https://oauth.net/gnap/


> The main problem is the way these standards are implemented. OAuth 2.0 (on which OIDC is based) does define a reliable back channel for session revocation…

The larger issue is that even if you had a reliable revocation system - most relying parties wouldn't use it.

The typical relying party supports logins from Facebook, Google, Apple and the like because the user chose to use those as authentication systems. However, the relying party is independent. The user would not expect other sites and desktop apps to suddenly stop working because the user hadn't visited Facebook for a certain number of hours.

There were efforts in the past using transparent pixels on the identity domain to do distributed session tracking - e.g. if I'm interacting with _any site_ using the login, that will encourage the session to stay alive. Turns out that was way too much visibility for social login providers to have.


Hand-waving away the threat of application use on shared devices seems a little overconfident to me. This is probably not a threat for company devices, but it is clearly a threat in other environments, e.g., family members sharing a device. While some users might expect to be logged in all the time, others expect to be logged out after they close a web app tab. Session expiration should be application-specific. Google's sessions do not expire so that more user data can be collected. That is clearly more valuable to them than compromised accounts due to session hijacking. Their doing it is not a great precedent for others, because their value proposition is entirely different from that of most other web applications.


> Google's sessions do not expire so that more user data can be collected. That is clearly more valuable to them than compromised accounts due to session hijacking. Their doing it is not a great precedent for others, because their value proposition is entirely different from that of most other web applications.

I think this is a pretty bad take. Google runs some very sensitive applications for paying enterprise customers, and they still tend to not expire sessions.

I also really don't like Google as an ad company, and I think my trust in their judgement has fallen precipitously over the last decade, but I find it hard to compare them to someone like Microsoft and say they're doing worse on the security front (I don't think they are).


Both Google and Microsoft have world-class security teams. But Google is run in a more effective way (so fewer instances of the right hand not knowing what the left is doing) and has less legacy to lug around, it also suffers less from the innovators dilemma (for now, at least).

It makes a big difference if your stuff has been designed as a web based service from day #1 or if you are required to talk to anything and everything on prem and off prem as well as in the cloud. The attack surface of a typical Microsoft enterprise product is absolutely gigantic and the fact that they do as well as they do is something to be appreciative of. That said I don't want their stuff anywhere near my company.


> Google runs some very sensitive applications for paying enterprise customers, and they still tend to not expire sessions.

For Google Workspace, web applications (e.g. Gmail or Calendar) will regularly force you to re-authenticate "for your security". It's not a daily thing fortunately, but it is common enough to be frustrating.


Session expiration length is a configurable setting by the domain admin, it's not enabled by default.


Oh is that why it asks for re-auth all the time? I thought it was built in. Nice, I’m going to disable that.


> Google's sessions do not expire

They kind of do. I use a lot of machines that I might only hop on once a month or so. Chrome sync often ends up in a "paused" state where I have to re-auth. YouTube will fall back to a not-signed-in profile on me and I'll need to re-auth every now and then. Loading up Gmail will have me re-auth again pretty often. Often its not a full re-auth with my security keys but it'll at least challenge a password. I get these challenges probably every week or so across all my devices.


If you live with a family, you don't have any security margin: an unattended computer gets the next user immediately and a lot can be done in just one minute.


> Perhaps you used the shared computer in the library to access your web application, and forgot to log out.

> Is this a thing? Are shared computers without user separation a thing? If so, these shouldn’t be used to access web applications with sensitive information at all.

Yes, it is a thing.

I understand you would like it to not be a thing.


Heh, author gave a perfectly reasonable example of where use might be shared, and immediately asked "Is this a thing?".

Like, yes. It is. You literally JUST gave me an example of it.

Also this, shortly after:

> Are shared computers without user separation a thing? If so, these shouldn’t be used to access web applications with sensitive information at all

That "should" is doing a lot of heavy lifting. You don't decide which security controls to implement based on your best-behaved users.


It's not even about behaviour.

Some government services switching over to fully digital means there's a cohort of people being left behind. A decreasing number, sure, but a number nonetheless.

Effectively the author is saying poor people who need to use library PCs shouldn't get security.


The article bringing up that scenario and immediately dismissing it actually convinced me to change my opinion to the opposite of the article's thesis. I generally haven't seen the need for short session expirations in the past (when I've thought about it, which isn't often), but I hadn't thought about the shared-computer scenario before. Keeping that in mind, and knowing that it can't be handwaved away (as you point out), short sessions make more sense to me now.


It reminds me of when I didn't understand why my library account has such a short expiration time. Almost every time I open the library's website, I have to re-enter the password. Why? What's so important about a library account? Who's going to borrow a book on my behalf?

And then I realized that logging into your library account is probably one of the most frequent things on all the shared computers in the library.


Even if it is a thing - after using a shared computer one MUST log out. If the 15 minute expiration time saved you then you're just damned lucky!


That's reasonable to do, but not necessarily reasonable to rely on. What if you lose your internet connection and can't log out? Or have a power cut, or have to leave in a hurry, or drop dead on the keyboard while using the computer?

Unfortunately for devs, RL is messy, and even if you can convince some people to do the right thing, if you're large enough you have to go by Murphy's Law and work around the people that you know won't / can't.


I have a feeling a lot of people get "lucky" a lot.


> Also, it would be better to protect against this by securing the logs or using hard drive encryption.

This one line is emblematic of the flaws in the article.

My take on the article is, “Imagine that everything else in a system is done correctly, and the system, overall, is perfectly secure. In this imaginary world, short sessions don’t help.”

One fact about security which you cannot avoid is that any one particular security feature may fail or be bypassed in some way. What are the consequences of this? Well, it means that you want multiple layers of security. Your server runs its daemons with minimal privileges so that a remote execution vulnerability needs to be combined with a privilege escalation vulnerability. You don’t think about the “right way” to secure something, but you think about multiple ways to secure something, and don’t stop securing it just because you’ve found one good option.

Short sessions are there because there are various ways that sessions could be compromised.


The author also puts lots of faith in the user not doing stupid things:

“Is this a thing? Are shared computers without user separation a thing? If so, these shouldn’t be used to access web applications with sensitive information at all, no matter how short the session expiry time is.”

Yeah, users might just leave their bank account logged in on an open computer in a library. That's why short sessions exist: they limit the window of opportunity for a bad actor to do something bad. Not perfect, of course, but they limit the exposure.


> Not perfect, of course, but they limit the exposure.

It's a shared computer (and if the session is carrying over, it's not just shared hardware it's a shared account).

In this case - you are utterly fucked if you think that machine is secure. Hell, fuck the session, I'll just run a keylogger (or if I'm not admin, install a malicious browser extension) and capture your whole login - I have considerably more access to this machine than I need to compromise you in a large number of ways.


> I'll just run a keylogger

You could. But what is relevant is whether you have. The point is not to protect against a determined attacker, but to reduce the chance of an opportunistic attack.

It really feels like most people have little experience with shared computer resources any more, because pranking people who left their computer unlocked used to be practically a sport, even when the same people would (mostly) never go out of their way to attack a locked down account.

Screen locks became a thing long before mobile devices for a reason.


Except these public or low bar access machines are there for generally vulnerable people - elderly, homeless, migrant workers, or others who generally don’t have a dedicated machine they can secure. Most people don’t “think” about security at all, it’s beyond their education and familiarity. Banks, for instance, offer access to their services to everyone - not just hacker news readers. They’ve started adding fees to in person or phone interactions directing people who can least afford the fees towards online access, which they often don’t have direct access to. ISP fees can be expensive, computers as well, or in the case of elderly they often just don’t know how to get started and value accessing via the library. If you’re homeless you don’t have a place for an ISP subscription, and many access devices at shelters. People in jail or prison also use shared devices.

Yes, they're overall very exposed to bad actors, and the machines they use are really insecure. So short-lived credentials definitely reduce the blast radius. If you have a keylogger but no automated account-drain scripts, a 5-minute timeout will effectively stop you unless you're actively watching.

Finally, some of these are regulatory requirements for better or worse. That doesn’t forgive the regulation, but it does sometimes explain the policy.


Maybe the machine is secured by not allowing anything further to be installed on it. Or maybe it's not. I've heard most crimes are due to opportunity, so it's best to protect your users from accidentally leaving themselves logged in on a shared computer for the next person to find that opportunity.


And yet ATMs work every day on the principle of asking you to just re-enter your PIN for potentially harmful transactions.

Let’s also pretend like there’s a security camera looking at my desk that is usually not monitored, but you don’t know that


Let's say the bank also uses 2FA (say, physical code calculators) - your next step?


Install system malware, wait for the next login (which will be soon, since the short session forces repeated logins), and send the session token to myself.

Done. Now I have an active session. Don't give a fuuuuck about that 2fa device.


This is a bit like saying, "there's no point in having a lock on my door because somebody in my house can shoot me". The fact that an outer ring of security can't protect you from people who are already in an inner ring doesn't invalidate the outer ring.

If you already own somebody enough to install whatever malware you want on their computer, then sure, session lengths aren't going to stop you, but they're also not intended to. Session lengths are intended to stop the guy at the coffee shop who grabs your computer when you go to use the bathroom.


Got it, so people who aren't privileged enough to own a device shouldn't use the internet.


> Users might just leave their bank logged in a open and logged computer library.

Fine. Then add an option where I can press a button in order to signal the bank that I'm at a secure computer, and that I'd like to increase the session timeout to 1 hour for this one time.


Another scenario is a large office. If a user leaves their desk to get a coffee, an attacker could walk up and get access. Of course, in that case they could also install keyloggers, MITM software, etc., so they will get access to anything they want.


It's better to let users do stupid things so they learn from their mistakes and not do them ever again. And probably tell their story to their friends and family so they, too, don't do this. Putting all these excessive guards in place kinda encourages ignorance and tech illiteracy.


You have far too much faith in stupid people.


>One fact about security which you cannot avoid is that any one particular security feature may fail or be bypassed in some way. What are the consequences of this? Well, it means that you want multiple layers of security.

I think the problem here is that no one ever attempts to define what "multiple" is in layers. Most seem to agree that one layer often isn't enough for the reason pointed out.

The issue I take is that no one knows or provides any sort of guidance on how many layers are enough. People working in security take on a degree of liability in their jobs, perhaps their careers, for compromises. As such, all the incentives are to add as many layers as possible, everywhere. So we get N-factor auth, continuously expiring passwords, biometrics, regularly expiring physical access credentials, and increasingly fleeting sessions; often work requires layers of these sessions, so you end up with multiple compounding needs to reauthenticate: to the network, to some remote application debugger, to some application running within an application, and so on.

I often work in secure environments and it gets tiring, tiring to the point I admittedly start to take shortcuts to make my life easier, shortcuts that ultimately defeat a few layers of security in some way to keep my sanity so I can... do the actual work.

So the other extreme of one layer of security is N layers of security where nothing is ever usable. There need to be some reasonable protocols in place, and the people who write them need to be forced to actually go through the use cases while being held to efficiency/production quotas. Only then do I believe security starts to strike an appropriate balance between layering security and creating value. In the current state of affairs, a lot of places are off the rails: they add so many barriers to normal use that they create behaviors in authorized users that open very bad holes, whereas if a few layers were peeled back and appropriately managed, the overall security of a given system would likely improve.


> I often work in secure environments and it gets tiring, tiring to the point I admittedly start to take shortcuts to make my life easier, shortcuts that ultimately defeat a few layers of security in some way to keep my sanity so I can... do the actual work.

Yeah, and without getting as agitated as that other guy in the comment thread, this is where security becomes its own enemy. And how achieving effective, practical security is a lot more subtle than it seems.

For example, we've been talking about putting 2FA onto the identity provider allowing access to internal high privilege administrative interfaces, for example via Duo, which requires you to authorize a login via your phone. However, an important question there was: How long would we trust a 2FA authorization?

If every single login after a session expiration requires 2FA, people would riot and/or start looking for workarounds. That's not great. But eventually, we ended up trusting a 2FA authorization for like 12 hours. That's one work day, even if it goes longer. Usually this means you have 2 Duo pings in the morning - one for the VPN and one for the internal IDP. That's entirely acceptable imo.

And in a similar sense, SSO can increase security while increasing convenience as well. For example, our ADFS allows sessions of about 4 hours, so if you access it once in the morning and once during midday, you stay logged in. And this in turn allows systems like the Keycloak acting as an IDP to work with very short session timeouts. As long as you're working, your keycloak session remains active. Once you're not active for 5-10 minutes, you're logged out from keycloak - but that's just 1-2 redirects on the next click and you're back in.

And once this is simple and convenient, people want to use this. And suddenly you got rid of a mess of local accounts in different systems and everything is based on the central directory. And that, in turn, is more valuable than validating MFA multiple times a day in the grander context, at least in my book.
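The trust-window approach described above can be sketched like this (the 12-hour figure matches the comment; the store and function names are illustrative, not from Duo or any real IDP):

```python
import time

TWO_FA_TRUST_SECONDS = 12 * 3600   # trust a 2FA approval for one work day

# Illustrative store: user -> timestamp of their last successful 2FA challenge.
last_2fa_ok = {}

def record_2fa_success(user, now=None):
    last_2fa_ok[user] = time.time() if now is None else now

def needs_2fa(user, now=None):
    """True if the user must complete a fresh 2FA challenge.

    Short session expirations stay cheap because re-login within the
    trust window only asks for the password, not another phone ping.
    """
    now = time.time() if now is None else now
    verified_at = last_2fa_ok.get(user)
    return verified_at is None or now - verified_at > TWO_FA_TRUST_SECONDS
```

The key design choice is that the 2FA trust window is tracked separately from the session lifetime, so short sessions and infrequent 2FA prompts can coexist.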


> “Imagine that everything else in a system is done correctly, and the system, overall, is perfectly secure. In this imaginary world, short sessions don’t help.”

Isn't the reverse also true?

Imagine that everything else in a system is done incorrectly, and the system, overall, is completely insecure. In this imaginary world, short sessions don’t help.

At any point between "perfectly secured system" and "completely borked security in every single layer", in what way will short sessions help?

If you've got a good and secure setup, with maybe one or two holes you don't know about, how does a short session help?


Short sessions are obscurity...not security. If you use a serious website like Fidelity's, they don't let you do anything impactful without an authentication challenge. You could have logged in a few seconds ago...want to tinker with bank accounts? Challenge.


Fidelity also logs you out after a short duration of inactivity…


Short sessions are there because, for all practical purposes, SLO (single logout) doesn't work, and we are using short sessions to simulate high-latency SLO. In a previous life we supported SLO properly, and it had no value at all.

It's one of the many sad realities of the modern world that basic functions don't work and no one will fix them.


Also pointless: "your session will expire in X minutes; click Continue to stay logged in" alerts. That just turns a short session into a long session? And provides a handy session.renew() call for the attacker?
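One common mitigation for exactly that failure mode (the names and durations below are illustrative, not from any particular framework) is to pair the sliding idle timeout with an absolute session lifetime, so a renew call can extend activity but never extend the session indefinitely:

```python
import time

IDLE_TIMEOUT = 15 * 60        # sliding window, renewed by activity
ABSOLUTE_LIFETIME = 8 * 3600  # hard cap that a "Continue" click cannot move

def touch(session, now=None):
    """Renew on activity (including a 'Continue' click): only the idle
    window slides; the absolute cap stays anchored to login time."""
    session["last_seen"] = time.time() if now is None else now

def session_valid(session, now=None):
    now = time.time() if now is None else now
    if now - session["created"] > ABSOLUTE_LIFETIME:
        return False  # no amount of renewal pushes past the cap
    return now - session["last_seen"] <= IDLE_TIMEOUT
```

With an absolute cap in place, the renew button is a usability affordance rather than an unbounded session extender.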


I tend to agree with respect to machine sessions where tokens are handed around and persisted and the cost of reauthing bothers no one. Sessions should be as short as is reasonable for performance.

For human applications you should generally not expire general access, but rather expire leases to critical actions. Generally there are more and less sensitive realms of actions and reads that a user can do, with a nonlinear continuum of risk and user pain. Logging in and browsing your content - fine. Mutating, deleting, viewing sensitive data like credentials, creating new credentials, changing credentials, etc. - reauth either every time or within some short window like 5 minutes.
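The lease idea above amounts to a step-up check on sensitive endpoints; a minimal sketch (the 5-minute window and all names here are illustrative):

```python
import time

REAUTH_WINDOW = 5 * 60   # short lease, applied only to critical actions

class ReauthRequired(Exception):
    """Signal to the UI that the user must re-confirm their credentials."""

def require_recent_auth(session, now=None):
    """Gate a sensitive action on a recent credential confirmation,
    independent of how old the login session itself is."""
    now = time.time() if now is None else now
    if now - session.get("last_reauth", 0.0) > REAUTH_WINDOW:
        raise ReauthRequired("please confirm your password or 2FA")

def rotate_api_key(session, now=None):
    """Example sensitive action; ordinary browsing would skip this check."""
    require_recent_auth(session, now)
    return "new-key"
```

The session itself can live for weeks; only the sensitive-action lease is short, which keeps the reauth pain proportional to the risk.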


Fully agreed. Just because one security measure doesn't prevent all malicious attacks doesn't mean that it "does not help security". That is fundamentally false, because some attacks rely on long expirations; against those attacks, this method does help security. Not all malicious attempts are refined or perfectly executed, and sometimes an attacker can simply rely on a token that lasts too long.

It's a clickbait title and it worked. A title like this would be much more accurate: "Short session expirations provide less security than you might think"


> This one line is emblematic of the flaws in the article.

I don't think you can reasonably dismiss this article based on that line.

(1) It's just an aside, not part of the main argument; (2) It happens to be true.

Anyway... "Who knows, it might help" is OK (well, better than nothing), when there's not a significant cost to pay. But when there is, you need to go deeper.

There's a tradeoff being made and if you don't think of it that way, you're going to make a poor decision.


> Short sessions are there because there are various ways that sessions could be compromised.

You haven't actually addressed either of the author's points, though. Namely:

1: A short compromised session is still a compromised session. The duration usually does not prevent the attacker from achieving their goal.

2: The vast majority of ways to compromise a session already give you access far beyond that session itself (ex: you have user access on a local machine, or physical hardware access, or you're an admin who manages that user, etc/etc/etc). So an expired session is, at most, a small speed bump in those cases.

So... going back to the point: If expiring sessions is terrible UX (and it is) and it's not stopping attackers (and it's not), why are you doing it?

----

> You don’t think about the “right way” to secure something, but you think about multiple ways to secure something, and don’t stop securing it just because you’ve found one good option.

This attitude is cancer. Let me throw another quote at you:

"A ship in harbor is safe - but that is not what ships are built for."

----

Security is ALWAYS (fucking always, yes really - fucking always) a tradeoff.

The most secure application runs completely isolated, with no input or output, and is totally, utterly useless. But no worries - it's secure!


Can you please not post in the flamewar style to HN? I realize it's the tradition in some online circles, but it's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


> This attitude is cancer

> yes really - fucking always

You're really angry with that straw man you've stood up. The OP isn't saying systems have to be locked down to the extent that they're useless. Who would ever argue for such a thing?

Note that "You don't stop securing it just because you've found one good option" is NOT the same thing as saying "You don't stop securing it until you've closed every possible security gap and compromised usability".

Be charitable in your interpretation. Your OP is making a simple argument that defense in depth is a part of a security posture, precisely because they agree with your core premise, that security is always a tradeoff. Because we won't choose to implement every security measure, and because the security measures we do implement will be flawed, or compromised by the need for the application to actually be usable.

Step outside, breathe some fresh air, maybe lay off the coffee first thing. And consider not throwing 'cancer' bombs at passers-by.


Please don't cross into personal attack, no matter how bad another comment was or you feel it was. It only makes things worse.

(I appreciate that you're arguing for charitable interpretation, but the personal swipes at the beginning and end of your post wipe out the good effect and then some.)

https://news.ycombinator.com/newsguidelines.html


The point is that it's a really bad tradeoff, because the impact to the users is high, and the impact to security is low. And yet we do it, because "You don't stop securing it just because you've found one good option", and that's a really bad reason to improve security by such a small increment with such large negative consequences.

The problem with 'defense in depth' is that it comes as close as possible to locking down systems to the point of uselessness, without actually, technically, preventing work from being done.


Ding ding ding!

If you want to defend in depth - more power to you.

If the way you're "defending in depth" is mostly not adding security, and is actively making the product less useful... I'm going to call it shite.

If you blindly say "defend in depth" without actually... you know... evaluating what that defense does to the product as a whole, you're doing your job poorly.


The rules of engagement on this site call not for radical candor but for taking the most charitable interpretation of someone's words.

Maybe the OP wasn't blindly saying 'defend in depth'? Maybe they were advocating for evaluating what that defense does to the product as a whole? If they were, is their attitude worth describing as cancerous?


If that was my entire comment - I'd probably agree with you. Good thing there's about 5 other paragraphs of content in my response that provide additional context...

My take: you're stuck on the word cancer as some sort of insult, rather than an analogy. I'd argue you're being fairly uncharitable in your responses - and further... you're yet again not engaging with the actual on-topic discussion.

Have a good one.


> Ding ding ding!

> If you want to defend in depth - more power to you.

> If the way you're "defending in depth" is mostly not adding security, and is actively making the product less useful... I'm going to call it shite.

Agreed. Sure, there is defense at different depths, but there's no reason to add depth without adding defense as well.


The other thing security people fail to realize is that when you’re hostile to UX, people start coming up with all sorts of workarounds that leave you less secure than you were before. Like the corporate managed laptop is so full of spyware that users bypass it and use their own personal device for development.


We don’t fail to realize that.

Security folks are humans too.

We realize that every human loves convenience and security removes conveniences. Simple As.

No matter what we do as security folks, the users will do everything possible to return to their convenience or complain about the inconvenience until the security is removed.

I’m not saying there aren’t over zealous security folk but our goal isn’t to make humans lives harder. We want to make it harder for the bad guys to ruin humans lives.


> human loves convenience

Except that it's not a matter of 'convenience', it's a matter of being able to do their jobs. Security is a hard job, in part because you have to come up with security practices that are actually workable, and keep work impediments to a minimum. It's really easy to just add more restrictions. It's hard to add security that doesn't impede the users. When I see 'defense in depth' being invoked to justify massive work impediments for minimal security improvements, I don't see effective security practices - I see a cargo cult.


No - your objective is to make the organization lose the least money possible, by reducing the incident rate, the recovery cost, or the impact. If you damage the org more than the risk you are protecting against, you are a liability. This isn't a good-vs-bad thing; it's deciding when the line is worth crossing, and this article says, at least in the author's opinion, that this one isn't. You still have multiple other layers.


That's a reasonable point, sure.

One which can be made reasonably, without telling anyone that their attitude is cancerous.


I don't think that appealing to courtesy is really the play here.

We are discussing ideas about security in a place and manner that allows us to have honest and frank conversations.

I think security teams optimizing only for security is actually a very apt analogy to cancer: part of the organization is acting in a manner that negatively impacts the organization at large, while positively impacting that subset of the organization.

Cancer is the act of some cells in your body prioritizing themselves at the expense of the whole.

Personally - I think you're digging to find an insult in that comment, and I take it as a way for you to disengage with the topic at large.

This is an attitude that is routinely used to shut out voices that don't match the current "dress code".

Trust me, I'm hardly going to be calling you cancer over the dinner table for not passing the salt. I'm using that word intentionally and carefully - in a frank and honest conversation. If you're feeling hurt (especially on behalf of someone else...) maybe go do something else?


> The problem with 'defense in depth'

No, that's a problem with bad engineering. That a process requires skills most people attempting it don't have isn't a problem with the process, it just means that it is hard and relatively new.

One thing I see all the time that demonstrates this incompetence is talking about something being more or less "secure" without reference to a threat model. You simply can't make reasonable tradeoffs without thinking this through, and yet nobody wants to do the exercise.

In fairness, this is not just an engineering fault. I've seen one case where a legal department freaked out when they heard about a risk analysis project in pursuit of a formal threat model - they vehemently objected to anyone producing documents about such things that could potentially surface in some discovery fight.


> One thing I see all the time that demonstrates this incompetence is talking about something being more or less "secure" without reference to a threat model. You simply can't make reasonable tradeoffs without thinking this through, and yet nobody wants to do the exercise.

Hey - I completely 100% agree. Believe it or not, I did quite a stint in software security before becoming this jaded (5+ years fulltime work at a security focused product sold primarily to large fortune 100 companies [banks - it was all banks]).

I think my problem is that for any difficult challenge... there is an answer that is simple, obvious, and incorrect.

My opinion is that the incorrect answers I see most are the two extremes: I don't care about security (BAD!). I only care about security (WORSE!!!!).

The first will eventually lead to compromised accounts/data and that can kill a company. The second will lead to products no one wants to use, which WILL kill a company.

Neither is a good spot to be. You want to find an appropriate compromise in the middle: Secure enough.

----

Side note - no one truly does the threat assessments based on threat model because no one in industry likes the answers.

For small and inconsequential threats - you are already secure enough.

For nation states - there is likely no solution that is workable if the thing is on the internet.

It's like trying to buy a secure door for your house: For most folks walking down the street, the current door is fine. When the Gov shows up with tanks - there is no door you can buy to solve the problem.


> Note that "You don't stop securing it just because you've found one good option" is NOT the same thing as saying "You don't stop securing it until you've closed every possible security gap and compromised usability".

It seems to me that 'defense-in-depth' can very easily become a mantra that inevitably means drifting towards the latter. What are the real guidelines for telling when enough is enough? Because ime people who even can articulate anything along those lines are way, way fewer than people who make appeals to defense-in-depth.

And I think this is part of the problem: without a principled way to assess what is gratuitous, repeated appeals to defense-in-depth will lead to security practices that heavily favor having more measures in place over having a good UX. This is because the environments where information security is most valued are already organizations that frankly, do not give a shit about UX. The customers for cybersecurity products are massive bureaucracies: large enterprises, governments, and militaries. The vendors that sell those products are embedded in a broader market where no software really has to be usable, because there's a fundamental disconnect between the purchasing decision and the use of the software. For all B2B software, the user is not the customer, and it shows in a thousand ways. In infosec things are further tilted in that lots of easy routes to compliance which are terrible for UX are falsely perceived and presented as strictly required, perhaps even by law.

The idea that in a B2B market which primarily serves large organizations and governments, you will get any organic weighing and balancing of security against usability 'for free' is sheer fantasy. So where is the real counterweight to the advice that on its own recommends 'always add more, unless you have good reason not to'?


I think this is a more elegantly stated version of my argument.

It's also why I strongly link this argument to cancer. It's an idea that grows unbounded until it's harmful, and by the time the organization realizes the harm, it's often too late to change.


> I think this is a more elegantly stated version of my argument.

Yep. I think your argument pretty much conveyed the same thing, along with a lot of anger and frustration.

I also agree with that anger and frustration— I've felt the same rage before, when I've been hit with blockers or UX degradation related to nominal or actual attempts to improve security. Restrictions that are ill-motivated (or whose motivations are just not clearly or convincingly communicated) are infuriating.

> by the time the organization realizes the harm, it's often too late to change.

This worry is the twin of the rage, for me, this sense that I can't do anything about it and it's never going to get better. A dreadful, reluctant admission to myself that the only way to stop the continual degradation of my work life will be to uproot myself: give up my job and everything I do like about it, leaving behind people I enjoy working with and reducing the amount of contact I have with them.

Happily, engaging directly with my company's infosec department directly often gives me hope and allays these fears somewhat. But generally, online discussion with people who implement security controls tends to reinforce my worry that, to borrow your metaphor, the disease is systemic and terminal.

Most 'cybersecurity professionals' (who are visible online, at least) transparently do not give a shit about UX, display flagrantly antagonistic attitudes about users and developers, and talk often about defense-in-depth but never articulate any inherent limits for appeals to defense-in-depth beyond 'well don't bother with measures that don't increase security at all'. All of it sends strong signals that people who value UX, DX, autonomy, morale, and well-being, to the extent they are present at all, are outliers in infosec who do not belong and have no hope of being effective.

And then the response to someone openly including a dimension of emotionality in an argument about a security measure they feel is gratuitous and cumbersome is

> Did... did [a cumbersome security measure] hurt you?

Like, seriously? Yes. Indeed it did and does.

But more than the security measures themselves, the pervasive attitude conveyed by that belittling question is the even bigger problem. And it generates many of the small ones.


> You're really angry with that straw man you've stood up. The OP isn't saying systems have to be locked down to the extent that they're useless. Who would ever argue for such a thing?

You are - you're literally arguing for it right now. Short sessions just don't help all that much, and they have an outsized impact on users.

Why are you dying on this hill? Likely because your mindset is "security above all else" and that mindset is literally cancer.

Security above all else leads to a product that does fuck-all else (and it does it poorly).


Literally cancer? Really?


On the plus side, it seems HN has discovered the cure for cancer.


Yes. Literally.

I would say it makes an excellent analogy: A part of the whole (security) is prioritizing themselves and their needs in a way that makes the overall organism much less capable and effective.

Cancer.


Still metaphorical cancer.


Not according to the definitions of those words that I can see

Metaphor: a figure of speech in which a word or phrase is applied to an object or action to which it is not literally applicable.

Cancer: a practice or phenomenon perceived to be evil or destructive and hard to contain or eradicate.

My usage is literal. Organizations that act this way have cancer. They suffer from it, they can be treated, they can die.

It's as real as it gets. I am using cancer in the body as an analogy for this form of cancer - but I am not using a metaphor.


[flagged]


FWIW, it seems pretty clear to me that you are the person who is being insulting here. I mean, I can't even figure out how to reply to this comment without making it all about you and how you are involved in this thread and the words and strategy you are using, as that's what you are doing to horsawlarway... who, notably, was addressing an idea and an attitude, not a person and their behavior.

Like, if I put myself in the shoes of the people each of you are replying to, I can see how to reply to horsawlarway: if I disagree with their interpretation of what I said, I can argue back; if I realize that I said something poorly and didn't mean it, I can apologize and correct it; if I disagree with their analysis, I can push back with my own arguments...

...but your comment? You are just pointing at someone and calling them rude. You are then not only refusing to engage with their comment or their points, you are just telling them they shouldn't even be discussed or listened to because they used a word which, to be quite frank, is not insulting; and, in doing so, you have dragged this discussion from one between people who were passionate about an interesting topic that affects everyone on this website--one where I was excited to read both sides--to a bunch of people--sadly, now including me!--squabbling about how words are chosen and how arguments are formed while pointing fingers at each other about who is being insulting, which is a waste of everyone's time so bad I frankly feel bad about how I now feel like I also have to take part.

(BTW, I actually did at first choose to not send this comment: I wrote it, and then decided it was just further feeding you and further wasting other peoples' time, so I put it in a text file and moved on; but, then I noticed one of horsawlarway's responses had been flagged--even though you are clearly the instigator here--so I would need to post a defense of why I vouched for it: you are the person who not only took this thread in a personal direction, but you are the person who decided to start throwing around playground-style bullying: saying an idea is cancer is an opinion and analysis, not an insult... but all of your comments on this thread are patronizing and are the kind of taunting people use to start fist fights.)


> So... going back to the point: If expiring sessions is terrible UX (and it is) and it's not stopping attackers (and it's not), why are you doing it?

Every time I get into this kind of discussion, the answer seems to be "because it makes me feel better". Which is why it's so impossible to actually change someone's mind about it, and thus we have security "experts" (or worse, non-technical managers) making life miserable for thousands of users.


As the GP said, the attitude is cancer, and spreading is what it does.

It really doesn't help that "security expert" is a job title. That means this person is expected to deal with security, not business objectives, and will be blamed for security incidents, not business goals satisfied. If the OP's attitude is cancer, this is what causes it, and it's completely toxic.


Exactly. Anyone working in a moderately large company (500+ employees) or in a regulated industry is familiar with this issue. Just as no politician wants to be seen as soft on crime, no CISO wants to be blamed for a security breach. So more and more security gets piled on to every process and application. MFA everywhere, even though you're required to use MFA for the VPN. SSO everywhere, because why not. Session expiration after an hour. VPN limited to 8-hour sessions (great for people who work long periods or who have long-running processes and don't want to use tmux/screen, etc.).

My team (20+ SREs/admins) spends roughly 40% of our time complying with either security requests, external audits, internal audits, internal queries from Risk/Security about servers etc. Figure the cost to our business of this (just from our team alone) is roughly $1M per annum. Add in the cost of all the "security" software, add another $2M per annum. Staffing of Risk/Security team is probably another $2M per annum. For all the other workers in IT (250 or so) probably another $3M. For non-IT workers, the added friction is easily 10% of their cost, so $15M. Add it up and you're around $25M.

And it's not even just the out-of-pocket costs, but the opportunity cost. Thank god I work in a business that has regulatory moats that prevent real competition.

Cancer is a perfect term for this IT culture.


The reason why your business can "afford" this inefficiency is due to regulatory moats.

Big companies are like governments, and when you remove outside market pressure, they become even more so.


The point I was trying to make about that is that most businesses don't have the financial resources to absorb the cost of "security" as it's being practiced now. Yet it's continually being foisted upon them as a necessity.


Yea, we agree on this.


What do you have against SSO everywhere? I see it as actually one of the things that makes obvious sense. It makes the user's life easier and improves security. It makes it easy to give a new employee access to a bunch of systems at once using RBAC. That and putting users' SSH public keys in LDAP and using that for auth everywhere instead of passwords are two obvious pure wins to me.


SSO is great in a monoculture environment, but falls down in heterogeneous systems. We have Azure SSO, Ping Federate, and a couple of others we're getting rid of. I think it adds unnecessary complexity, and fails too often. Our internal users don't like it because the failure modes are opaque to them compared to a userid/password.


“We can’t afford to not do it”


Sometimes the answer is "so our sales team can tell people our security is better than our competitors."


Hah, I worked for a software security company. This is literally the entire industry.

Check these boxes to be compliant to make that enterprise sale.

Is it actually more secure? Who cares, insurance will cover us now and the enterprise paid us.


I still have PTSD from dealing with HIPAA and SOX stuff. Don’t run your own credit card processing if at all possible.


Same thing with selling software to the government. "Is the software secure" is far less important to those customers than "does it pass the scan".


> 1: A short compromised session is still a compromised session. The duration usually does not prevent the attacker from achieving their goal.

Need a citation on that “usually” part. A short session duration most definitely:

* Makes it less likely that when an attacker obtains a session token, it is unexpired.

* Gives the attacker less time to use a valid session token to move laterally into (potentially) unfamiliar infrastructure.

The trade-offs, of course, are:

* Poorer UX, and potentially driving users to attempt to bypass approved tools and/or security controls

* More interactions with the authentication system, which, depending on the auth system and the attacker's motivation/capabilities, might actually let them harvest more credentials.

But, in any case, I'm not aware of any research showing that short session lifetimes don't thwart attackers' use of stolen tokens, and the idea is, to put it mildly, counterintuitive.
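The first bullet is easy to sanity-check with a toy Monte Carlo sketch (the numbers and the uniform-age assumption are mine, purely illustrative, not from any of the research being asked for):

```python
import random

def p_stolen_token_valid(token_lifetime, login_interval, trials=100_000):
    """Estimate the probability that a token harvested at a random moment
    is still unexpired, assuming the user gets a fresh token roughly every
    `login_interval` hours and token age is uniform over that interval."""
    valid = 0
    for _ in range(trials):
        age = random.uniform(0, login_interval)  # hours since last login
        if age < token_lifetime:
            valid += 1
    return valid / trials

# 1-hour vs 24-hour tokens, for a user who logs in about once a day:
p_stolen_token_valid(1, 24)   # ~0.04 — the stolen token is rarely live
p_stolen_token_valid(24, 24)  # ~1.0 — the stolen token is always live
```

Under these (crude) assumptions, shortening the lifetime does shrink the window, which is the parent's point; what it doesn't model is an attacker who harvests tokens continuously rather than at a single instant.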


Once the attacker knows they can steal the session ID, and how long it will be valid for, it's just a matter of running a script to do all that as fast as possible.


That’s one possible scenario.

The correct way to evaluate security is to consider many different scenarios, and consider how your mitigations affect the likelihoods of all of them, weighted by their impact.


> The correct way to evaluate security is to consider many different scenarios, and consider how your mitigations affect the likelihoods of all of them, weighted by their impact.

NO! that's merely the FIRST step in evaluating security. The next steps are: What sort of threat am I attempting to prevent? How do my mitigations impact usefulness of the product on the whole? And most critically: Are my users better served by adding this security measure?

That is much more likely to be determined based on what the product/tool is trying to achieve. Which brings us right back to: Security is a tradeoff.

There are situations where I think short sessions make sense (ex: changing billing/contact info). There are also situations where short sessions are huge negatives (ex: how well is slack going to work if you get logged out every 15 minutes?)

My proposal is simple: Actually do the damn evaluation, instead of just blindly siding with "moar security = moar better!"


> My proposal is simple: Actually do the damn evaluation, instead of just blindly siding with "moar security = moar better!"

Agree 100% on this.


I will posit this proposition:

Short sessions are thinking like physical security. Someone can pick any lock, the question is will it take long enough for a human to interrupt the attack?

It doesn’t matter how long a computer has access to the key. How fast it can cause damage is limited by the speed of light, not human fingers.

If you ever leave the credentials where they are accessible, they can be used even if the session expires in three seconds. And if they’re being seen (in motion) why would the session be expiring in three seconds?

Machines hack differently than humans. Don’t think in human timeframes or orders of magnitude. That will let you make mistakes you can’t afford to make.


> Machines hack differently than humans.

An awful lot of hacking is done manually, by humans. For many scenarios, considering human timeframes is completely reasonable.


We live in a world where real time advertising auctions happen. If you think there’s something about that which can’t apply to organized crime you’re gonna be in for a rude awakening when your systems start to fail en masse.

I’ve had to replace a credit card twice for suspicious activity (once cost me my favorite domain name, which is still parked). There were no major charges in either case. One charge at a business I’ve never been to.

Some people get a card and use it. Some immediately sell it after proving it works. That means a clearing house. A food chain. That will get more automated, not less.


Sure, there are some attacker activities that are highly automated. That's not what we're talking about here. We're talking about compromised, time-limited session tokens, which are frequently harvested by manual action, and in those scenarios, thinking about human timeframes is very useful.


> * Makes it less likely that when an attacker obtains a session token that is is unexpired.

But gives them more opportunities to acquire such a token.

> * Gives the attacker less time to use a valid session token to move laterally into (potentially) unfamiliar infrastructure.

Only if the token's lifespan is counted in milliseconds. Otherwise, the attacker will refresh the session token as soon as they get it, and continue to do so. An active session token can be thought of as having an arbitrarily long lifespan.


> But gives them more opportunities to acquire such token.

Maybe, if the session tokens are being acquired by improper logging. If the tokens are acquired via the user’s cookie store, for instance, the total number of session tokens is going to be the same — the user is going to use the applications they use, and the stored session tokens will reflect that.

> Only if tokens lifespan is counted in milliseconds. Otherwise, the attacker will refresh the session token as soon as they get it, and continue to do so. An active session token can be thought as having arbitrary long lifespan.

That's only true if the session timeout is based on inactivity, not total lifetime.

There are instances where session timeouts/forced reauth are useful and where an attacker could not endlessly refresh the token.
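The distinction this subthread is circling — a sliding inactivity timeout an attacker can refresh forever vs. an absolute lifetime they can't — can be sketched in a few lines (field names and durations are hypothetical, not from any particular framework):

```python
import time

IDLE_TIMEOUT = 15 * 60        # sliding window, refreshed on every use
ABSOLUTE_LIFETIME = 8 * 3600  # hard cap, cannot be refreshed away

def session_valid(session, now=None):
    """Check a session dict with `created` and `last_used` timestamps.
    The idle timeout alone can be kept alive indefinitely by anyone who
    touches the session periodically; only the absolute lifetime bounds
    how long a stolen session stays usable."""
    now = time.time() if now is None else now
    if now - session["last_used"] > IDLE_TIMEOUT:
        return False  # user (or attacker) went quiet
    if now - session["created"] > ABSOLUTE_LIFETIME:
        return False  # hard cap: no amount of activity extends this
    session["last_used"] = now  # sliding refresh on each use
    return True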


How likely is it for an attacker to get access to the user's cookie jar at a single instant in time only?

> There are instances where session timeouts/forced reauth are useful and where an attacker could not endlessly refresh the token.

If the token isn't refreshable without a "real token" the "real token" will probably need to be somewhere the attacker can get it anyway.


> How likely is it for an attacker to get access to the user's cookie jar at a single instant in time only?

Depends on the dwell time the attacker has until detection and eviction, but generally speaking, in the scenario where session tokens are being harvested from a user’s workstation or something like a jump server, the attacker is going to be able to access stored session tokens from the most recent login prior to their gaining access and any that occur during access. In any case, shorter session tokens are going to result in less access for the attacker. There isn’t a scenario that results in more access, and only absurdly contrived scenarios that result in the same access.

> If the token isn't refreshable without a "real token" the "real token" will probably need to be somewhere the attacker can get it anyway.

That may be true of ticket-granting-ticket schemes, but not for single/multifactor authentication for ticket schemes. Both scenarios exist and need to be accounted for appropriately.


Why wouldn't the system require each refresh of the session token to require additional authentication? Then a stolen token can't easily be refreshed.


UX for common applications is poor. Would you want to have to enter your email password every hour or two, for instance?

I’d argue for truly critical infrastructure, short-ish session times can be useful, but for most applications they do more harm than good and better alternatives exist. For instance:

* Enforcing step-up authentication for access to sensitive application functions.

* Forcing re-auth based on behavioral analytics (for instance, if the user normally accesses an app 8 - 5 Monday - Friday from the United States, but presents a session token on Saturday afternoon from Russia, maybe force a reauth.)

* For enterprise apps, SSO where you may be forcing an authentication event every shift, but at least it is one, not one per app.

But, of course, there is no one right answer because there is no universally applicable or agreed upon threat model.
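The first two bullets could look something like this (paths, field names, and thresholds are all made up for illustration; real behavioral analytics are far more involved than a couple of set lookups):

```python
def requires_reauth(session, request):
    """Decide whether to force a fresh authentication for this request,
    instead of expiring the whole session on a timer."""
    # Step-up auth: sensitive functions always re-prompt, regardless
    # of how old or fresh the session is.
    if request["path"] in {"/billing", "/change-password"}:
        return True
    # Behavioral: token presented from a country this user never uses.
    if request["country"] not in session["usual_countries"]:
        return True
    # Behavioral: access far outside the user's normal hours.
    start, end = session["usual_hours"]
    if not (start <= request["hour"] < end):
        return True
    return False
```

The appeal of this shape is that the ordinary 9-to-5 request never sees a prompt, while the Saturday-afternoon-from-Russia request always does.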


I think this debate is too abstract to be useful. Certainly, there are some cargo cult practices where various forms of "key rotation" has gotten out of hand for no real benefit. But there are many valid scenarios which depend on different overall system characteristics. I'll just revisit a few.

1. Actual encryption key rotation. This is one grandparent of modern session limits, built with an assumption that keys might be compromised by cryptanalysis rather than stolen by malware-infected clients etc. Rotation reduces the number of ciphertext examples made available to perform the cryptanalysis, thereby reducing the risk of successful attacks. It also may reduce the scope of compromised data if and only if key exchange is still considered secure, such that holding one key does not let you decode traffic captures to determine earlier/later keys in a transitive attack. (Separately, you may have a UX requirement to prevent idle sessions from staying open on clients because people are in a shared space with client devices; HIPAA touches on this for medical systems.)

2. Forcing idle logouts with less dependence on client systems. This is another grandparent of modern limits which is sometimes cargo-culted. The underlying risk is people logging in and then abandoning a terminal in a shared space, so you want to reduce the chance that someone else can walk up and start using it. You really want the client system to participate in this, i.e. not just end a session but lock/clear the screen so previous data is not on display for unauthorized observers. But it is often seen as defense in depth to also abort the remote session credentials so that future actions are unauthorized even if the client software has failed to do its part, such as if it has crashed/hung. This one has the weakness you mention: a malicious client could do endless refreshes to prevent the server from detecting an idle user.

3. Forcing reauthentication periodically or for high-value actions. This is more paranoid than the prior idle logout concept, and actually demands the user reestablish their authenticity during an active session. This has been used historically as another kind of defense in depth attempt to verify user presence with less trust of the client system. But it is also used as a UX ritual to try to get the user to pay attention as well as build an audit chain for their consent to a specific action. Designers might tie this in with MFA/hardware tokens, to get different kinds of user-presence detection throughout a session.

4. Decentralized "web-scale" architecture and caching. In a traditional system, a session key is little more than an identifier and might be checked on each use, i.e. each web request, to determine actual client identity and privileges with a server-side lookup. But as people developed more distributed services, they have often shifted to embedding more of this info into the session "key" itself, as a larger signed blob. Different service components can decode and operate on this blob without having to do synchronous lookups in a centralized session database. This is where automatic refresh still serves a purpose, because it is during each refresh handshake that the centralized check can occur and potentially reject the refresh request or issue modified privileges. This rotation rate defines a system-wide guarantee for how long it takes certain changes to take effect, e.g. disabling a user account or changing the roles/privileges assigned to a user. I have seen this approach even in systems where the session key is a mere ID and cache invalidation could arguably be handled in the backend without forcing client's session keys to rotate. This seems like cargo culting, but is useful if the system operator wants to retain the option to use signed blobs in the future, and so does not want to allow client programmers to assume that client session keys are stable for indefinite time periods.
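Point 4's signed-blob-plus-refresh pattern, reduced to a minimal sketch (HMAC stands in for whatever signature scheme a real system uses, and an in-memory set stands in for the central session store; all names here are illustrative):

```python
import base64, hashlib, hmac, json, time

SECRET = b"illustrative-signing-key"  # a real system would use a managed key
REVOKED = set()                       # stands in for the central session store

def issue(user, ttl=300):
    """Mint a self-contained, signed claims blob with a short expiry."""
    claims = {"sub": user, "exp": time.time() + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_locally(token):
    """Any service instance can do this with no central lookup."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["exp"] > time.time() else None

def refresh(token):
    """The refresh handshake is the one centralized checkpoint:
    revocation takes effect here, within one token lifetime at most."""
    claims = verify_locally(token)
    if claims is None or claims["sub"] in REVOKED:
        return None
    return issue(claims["sub"])
```

Disabling a user here leaves their existing blob locally valid until `exp`, but the next refresh fails — which is exactly the "system-wide guarantee" the rotation rate defines.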


> A short compromised session is still a compromised session. The duration usually does not prevent the attacker from achieving their goal.

I'm a bank robber, I want to steal your money without you knowing so I'm not caught.

What would be a better way to do that? Withdraw $1000 immediately or to spread out that withdrawal over several months.

A short token forces the $1000 withdrawal immediately. And one common way these tokens are compromised is a scammer getting Grandma to open the developer console so they can "fix" things.

> The vast majority of ways to compromise a session already give you access far beyond that session itself (ex: you have user access on a local machine, or physical hardware access, or you're an admin who manages that user, etc/etc/etc). So an expired session is, at most, a small speed bump in those cases.

Or you are employing the common scam above, screen sharing under the guise of helping.

Granted, some of the calculus needs to be "what type of app is this? What does compromise mean?".

> The most secure application runs completely isolated, with no input or output, and is totally, utterly useless. But no worries - it's secure!

I didn't take this as what the op was saying.

Security works in layers and good security imagines that other layers don't exist or might have been compromised. In your ship analogy, that's adding a second hull, putting airtight sections between hull locations, and having lifeboats.

You wouldn't eject the lifeboats because "we have two hulls, what could go wrong!"

The actual cost of doing this is generally developer time.

There are certainly practicality limits, but in general a layered approach to security is how you both increase security and decrease compromise impact.


This is just wrong on so many levels.

> A short token forces the $1000 withdraw immediately.

No, a short token forces the attacker to continue making requests, but otherwise places very few limits on what they can do with it (since these tokens are almost always something like "15 minutes since the last use")

> In your ship analogy, that's adding a second hull, putting airtight sections between hull locations, and having lifeboats.

This is PERFECT! Because it highlights exactly the trade off I'm trying to point out. Fucking no one uses double hulls except for oil tankers, and they only do it for a very specific reason: They want to stop oil from leaking out.

They pay a HUGE cost for it, but it happens to be worth it for this very specific trade. Here's the tradeoff:

---

Double-hulled tankers have a more complex design and structure than their single-hulled counterparts, which means that they require more maintenance and care in operating, which if not subject to responsible monitoring and policing, may cause problems.[2] Double hulls often result in the weight of the hull increasing by at least 20%,[3] and because the steel weight of doubled-hulled tanks should not be greater than that of single-hulled ships, the individual hull walls are typically thinner and theoretically less resistant to wear. Double hulls by no means eliminate the possibility of the hulls breaking apart. Due to the air space between the hulls, there is also a potential problem with volatile gases seeping out through worn areas of the internal hull, increasing the risk of an explosion.[8]

---

> The actual cost of doing this is generally developer time.

No - the actual costs are usually actual costs, paid by all members of the system. Developer time is just the most obvious up front cost. There is a cost every time a user has to re-authenticate. There is a cost in resources to handle the extra authentications. There is a cost in complexity to maintain and extend the system doing authentication.


>There is a cost every time a user has to re-authenticate. There is a cost in resources to handle the extra authentications. There is a cost in complexity to maintain and extend the system doing authentication.

I think this is definitely where the security trends in modern IT have gone very awry, as it _is_ extremely annoying to be an end user having to work with modern IT security practices. Off the top of my head:

- MFA everywhere means that if you have any issues with your alternative authentication devices, you are completely locked out of your work and probably your life until you get that resolved

- broad and vague geo-based block lists mean users just flat out cannot access resources depending on where they happen to be, which means service desk tickets, investigations, and ultimately people who cannot access non-sensitive data they should be able to, just because of where they are physically located

- CAPTCHAs can lock entire classes of persons out of specific services as the CAPTCHAs aren't easy for these classes to perform on demand

- SSO/SAML authentication pages take you on a whirlwind tour of dozens of randomly generated authentication pages meant to establish and pass your authentication back to a central location, and it makes it impossible to tell from the URLs themselves whether or not it's suspicious unless you know the specifics of the system in use; this is particularly bad because this is exactly what it will look like if you click on a spam site in a search result or a compromised webpage. How is a user supposed to know when they've accidentally gotten tricked into a compromised authentication page? The ubiquity of SSO for logins is nice, but it also means that as a user, I expect that I can be taken to an SSO page from just about anything, so how am I supposed to know if the entry point from page X is legit compared to the entry point from page Y?

- a corollary to requiring multiple authentications even from the same device (looking at you Microsoft...) is it creates uncertainty as to when I should expect to have to sign in; if opening a link to a report requires me to authenticate or just accessing an internal web portal requires additional auth, why should I be suspicious when my colleague's account gets compromised and an attacker sends me a link saying "hey, we need to respond on this form by EOD; don't have time to explain in full, but it's pretty straight-forward. I'll follow up in a few hours when I'm done with another item"

- Edited: another corollary of SSO is that getting auth'd once means you get auth'd a lot. While more sensitive services should require additional security checks, since you're already auth'd through the main means of identification, it's often trivial to get the access by normal means or to social-engineer it

It really sucks to be an end user in such environments, and it's just too easy for IT security to absolutely lock out legitimate users who are following the policies as best they can with earnest intent.


> It really sucks to be an end user in such environments, and it's just too easy for IT security to absolutely lock out legitimate users who are following the policies as best they can with earnest intent.

Yup. I'd add to your list: multiple corporate auth systems/domains that are supposed to be in sync, but sometimes aren't. When that breaks, you end up having to turn the Internet off to even log in to your work computer, and find yourself flying out to another country so the IT people there can fix the mess, and this is cheaper than them spending a long time trying to help you remotely, while you can't do any work.

Don't ask me how I know this.


> if you have any issues with your alternative authentication devices, you are completely locked out of your work

You have printed the rescue codes when prompted, and have put that physical piece of paper into your wallet, haven't you?


I have ~1000 accounts, ~200 of which are used for work occasionally. Their 2FA recovery methods vary, and some have no recovery method. I'd like to say my wallet is not large enough for the printed codes, but only about 5 accounts even offer backup codes, considerably fewer than the number of 2FA accounts.

Besides, my last Gmail account for work appeared to be locked to my phone and didn't accept backup codes, and was OAuth master to a number of other accounts.

(For real: I lost access to that Google account permanently when my phone screen stopped working due to an internal fault. It wasn't really a problem and I didn't pursue it fully because I left the job soon after anyway, but the fact I couldn't regain access during that time despite copying the broken phone's content to a new device which successfully transferred the 2FA codes for all other accounts, was striking. It's why I don't use Google for id when there's another option. I tend to use GitHub for id at the moment.)


That's a nightmare process for any normal user. There's no way the vast majority of people are savvy enough to do this correctly.


Which part of "click print, cut or rip out a corner, put it in your wallet" is a nightmare for a normal user? (I'm not one, can't judge.)


I used to do something like this with my passwords. A folded, printed sheet with tiny font holding my accounts and passwords that I carried in my wallet. Eventually I found there wasn't enough space even on both sides of an A4 sheet with the tiniest legible text, and a full sheet was hard to fold small enough. The text got mangled in places due to crushing.

I think normal users don't have a printer or a nearby print shop in 2023. (For those with an inkjet printer, the ink has dried and the head seized up since they printed something last year.)

Many people, who I assume to be normal, don't have a wallet separate from their phone these days. They use virtual payment cards on their phone and store paper notes and if necessary cash and a payment card inside their phone case. Not useful for "lost my phone" recovery codes, terrible for "my phone was stolen" as it reveals too much, but maybe good for "my phone broke".


The part where you actually have to do it. And the part where you remember where you put it. What happens when you lose your wallet? Or when the paper gets crumpled up and ruined? Or wet? Do they put it in a lock box? What good does that do them when they are in another country? What if they reset their password and have to reset their codes but forget to update the paper? Or what about when you have the old and new codes and can't remember which piece of paper you put on top? How many of them actually go back and verify that the codes work and the process hasn't changed and they can successfully recover their accounts? How about the 50 other accounts they have, all forcing their unique and totally different 2FA recovery process that isn't like any of the others?

I keep my backup codes in a GPG encoded document with copies of it in multiple places. It's a big pain in the ass but I know I'm covered. For the vast vast majority of people this is more theatrical bullshit they won't bother with.


Actually have to do it: I see, but really, dear real user, you are adept at printing pages, you do it quickly and masterfully, just click the button now.

Remember where you put it: the answer is trivial and always the same, "your wallet". The tech support will remind you to look in your wallet if you come to them with your problem.


Yup - let me just go get my "wallet binder" from the storage yard I have to keep it in after adding 800 pages of backup codes (which is literally not an exaggeration - I have more than 800 active accounts between personal/work/contracting).

Let me just bind this fucking book over here, after I ran out of printer ink twice while printing it, and shove that right on into my little wallet flap.

Perfect! Why didn't I think of this sooner!


Most people don't have printers on standby. Most wallets have lifetimes way shorter than those of account rescue codes. Everything else in a wallet - government-issued IDs, bank cards, etc. - has lifetimes way shorter than those of account rescue codes.


> Which part of "click print, cut or rip out a corner, put it in your wallet" is a nightmare for a normal user? (I'm not one, can't judge.)

- click print - you lost 50%+ of your users there, as approximately nobody has a printer on stand-by at home; if they have it at all, it's a hassle to turn it on, and half the time it's probably broken (ink dried out, etc.)

- put it in your wallet - where? Also, what if you lose your wallet? At least with everything else in it, there's a reasonable process of recovery, usually involving visiting banks and government institutions in person. No such thing for webshit MFA.

This is worth repeating: literally nothing else in your life works like this. There are no other documents that you need to hold on to for a decade or more[0], that are in any way important, and loss of which can't be recovered from. It's an impossible ask for most people, because nobody has habits or even required perspective for such use case.

(What I usually hear from people is, "you should put it in your safe". But I don't have one, and I've never (that I know of) met a single person who owned a safe either. It's some American thing, I believe.)

--

[0] - My Google account rescue codes are over a decade old now. I had to use them last year. It's a miracle I still had that piece of paper in my wallet - I've long forgotten about it, but it happened to be put next to a single-page reference for time travelers, so it got transferred to new wallets along with said reference.


Upvoted for the sharp sarcasm that's dripping from this comment.

It was sarcastic, right? Right?


> dozens of randomly generated authentication pages

I have never seen an authentication page be randomly generated.

Elaborate?


I'm explaining it poorly; think about the urls for common authentication redirects and how it usually looks when you go through an SSO portal.

Probably you start at a page like:

sso.company.com

When you try to access a service, you're taken to probably something like

sso.company.com/auth

If your company uses Microsoft or Gmail, it may briefly flash MS/Google's auth page before redirecting or loading the elements of your company's SSO portal

After login, probably it will then load something like:

saml_provider.company.com/authenticate/redirect

saml_provider.company.com/[some generated string in the url]/some_action_page

and depending on how it's configured, you might go through a few of those types of URLs with no direct connection to your company or the resource you want, but it's just the authentication process passing your authentication from service to service until finally it figures out to return you to your originally requested resource and it passes an auth token. †

The reason I think this is frustrating is that it's very fast, no user input, but it is observable by the user; you will see the pages loading and the long urls, sometimes some basic info is printed to the page with simple HTML, but the user has no idea what's going on.

Combine this with the fact that this is exactly what happens when you accidentally click on a spam site from search results, and my problem is "how can a user possibly know if this redirect spiral is a legitimate authentication process or if they've accidentally clicked on something compromised?"

† sometimes these auth-spirals don't even take you to the correct item you were trying to get to in the first place, it takes you to a generic landing page...Reddit is guilty of this from my experience where logging in to subreddits that are flagged NSFW will redirect me to the reddit front page instead of back to the subreddit I initiated the log in to check


Some Lastpass admin page redirects me no joke like 10 times.


exactly; if you know what these systems are doing it's easier to be comfortable with it, but it's still very annoying/long for every single login.

and we've done such a good job of training users to detect suspicious behavior, and here we are using the same suspicious behavior that spam sites use, it leaves me with a frustrated feeling.


> since these tokens are almost always something like "15 minutes since the last use"

No? They’re almost always “15 minutes since the last use, or 5 hours since it was created”, which results in a completely different security picture.
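To make that concrete, the combined rule can be sketched like this (a hypothetical check with illustrative constants, not taken from any particular framework): the session stays alive only while *both* the idle window and the absolute window are open, so an attacker keeping a stolen token warm still hits the hard cap.

```python
import time

IDLE_TIMEOUT = 15 * 60       # 15 minutes since the last use
ABSOLUTE_TIMEOUT = 5 * 3600  # 5 hours since the session was created

def session_valid(created_at, last_used_at, now=None):
    """A session is live only while BOTH windows are still open."""
    now = time.time() if now is None else now
    return ((now - last_used_at) < IDLE_TIMEOUT
            and (now - created_at) < ABSOLUTE_TIMEOUT)

# A constantly-refreshed (stolen) token still dies at the absolute limit:
t0 = 0
assert session_valid(t0, t0 + 4 * 3600, now=4 * 3600 + 60)      # recently used, under 5h
assert not session_valid(t0, t0 + 5 * 3600, now=5 * 3600 + 60)  # past the absolute cap
```

The absolute limit is what changes the security picture: sliding expiry alone can be kept alive forever by anyone holding the token.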


Why would I focus on your grandma when I can get everyone’s grandma and spread the theft out even more?

Sit in a coffee shop and steal credentials for a local bank all day.


I couldn’t agree more. Security is almost always user hostile (speaking from a UX perspective). I am NOT advocating that we remove security for obvious reasons (a hacked app is also user hostile). HOWEVER - if we can just acknowledge that security is antithetical to an easy to use, user friendly app then we can make appropriate decisions moving forward.

One of my favorite sayings is “if you are not careful, you are going to secure yourself right out of business”. Ease of use is a real thing, and if you don’t figure out a way to make a secure app that is also very easy to use, someone else will and your pleas of “but it is not secure” are going to fall on deaf ears.

Honestly, baseline security is part of “essential complexity” https://ferd.ca/complexity-has-to-live-somewhere.html. Essential complexity can’t be removed, but it CAN be moved around. Right now we are talking about the essential complexity of managing security through managing a user’s session length. The advocated solution is to make sessions short so that they expire quickly. This seems to remove all “accidental complexity” so that we are only dealing with the essential complexity. But this is misleading. There is still complexity in juggling those short sessions. As a designer you think this solution is simple, but what you have done is moved that essential complexity over to your users. THEY must now manage the impact of short sessions. The complexity does not go away, you just moved it to your users, hence it is user hostile.

The trick then, if making the best product is important to you, is to figure out a way of letting users have long sessions but managing it on your side. This seems to argue that you are making your system more complex by adding accidental complexity (which is generally a bad thing). But really what you are doing is moving some essential complexity away from your users onto you. You lower their burden. This is how you make competitive applications.


Is there a term for this kind of thinking, or a type of job role in security that focuses on problems like this? Are there any professional 'strategic rearranger of security complexity' or 'security UX champion' jobs out there?

This seems like it could be a really valuable and maybe also fun role, if one can find an org that has made room for it.


This isn’t a security mindset, it is a product development mindset. You run into problems creating these situations like we are discussing when roles across the company diverge and no one is responsible for the big picture. The security guy doesn’t care about product management, and the product guy usually doesn’t see the value in security. Good founders get this.


A lot of the same problems come up later in the software lifecycle, though. I wish considerations like this could be a factor in purchase and integration considerations.


> This attitude is cancer. Let me throw another quote at you

Nonsense, defence in depth is a core security principle. You should not rely on a single control to protect you.


> Nonsense, defence in depth is a core security principle. You should not rely on a single control to protect you.

And you should not prioritize security over the goal of the product.

The conversation is a discussion of relative value and tradeoffs. Does increasing security make the tool as a whole worse? Sometimes - the answer is yes.

I have a nice set of front windows, but that means a risk of someone breaking through them. I accept that risk for the windows - the extra light and visibility is well worth it, and the windows are not the only way in. Compare to short sessions.


The idea of “adaptive security” is compelling.

E.g. my bank makes me type my password and sends 2fa codes when initiating/approving wire transfers… even when I just logged in a minute ago. If I’m doing 2 wire transfers in a row, it doesn’t care, it still has me fully reauthenticate for every wire transfer.

But I’m fine with that because moving money is something that I’m willing to accept however many roadblocks are thrown at me.

But do I want that happening when I go to post a tweet? Absolutely not.

In other words, let’s adopt the concept of refreshing authentication upon particularly sensitive user actions and have lax requirements in other cases.

We don’t need to think of sessions as “logged in” or “logged out”. It’s possible to design a system where you’re always logged in forever, but you need to reauthenticate based on certain rules or actions given the context of the app and risk/threat.
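A minimal sketch of that idea (all names and windows here are hypothetical, not from any real product): the session itself never expires, but each action carries its own freshness requirement for the last successful authentication.

```python
import time

# Per-action freshness requirements: how recently the user must have
# proven their identity before the action is allowed (illustrative values).
REAUTH_WINDOW = {
    "post_tweet": None,      # never force reauth for low-risk actions
    "change_email": 5 * 60,  # must have authenticated in the last 5 minutes
    "wire_transfer": 0,      # always reauthenticate
}

def needs_reauth(action, last_auth_at, now=None):
    """Decide whether to interrupt the user, based on action risk."""
    window = REAUTH_WINDOW.get(action)
    if window is None:
        return False  # lax: the long-lived session is enough
    now = time.time() if now is None else now
    return (now - last_auth_at) > window
```

The session token stays valid indefinitely; only the "last proved their identity" timestamp is checked per action, which is roughly the Google/GitHub sudo-mode pattern described downthread.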


Agreed - this is a much better approach. The "session" that can do the normal daily tasks for users should last as long as you can make it.

The "session" that can do things like change 2fa/billing/contact-info (decidedly not-normal things) should last for exactly as long as it takes you to complete that form, and should require your pass/2fa again to touch.

This is currently Google's approach, and I find it much more sane.


GitHub does this too.


My bank does that but also:

* logs out after 10 minutes of inactivity - so doing anything that involves switching between an accounting app and it is annoying

* not allowing more than one tab open at once - that's just stupid in its entirety.


> But I’m fine with that because moving money is something that I’m willing to accept however many roadblocks are thrown at me.

Really? I'd change banks over that. If I log into my e-banking website, the main activity I'm going to do is pay bills. I would absolutely not tolerate having to jump through hoops to do it.


This is our business account so we're moving anywhere from $20k to $200k at a time.

In the spirit of adaptive security, someone moving $100k will probably be fine dealing with a couple extra password / 2fa prompts... but I agree it would be annoying to deal with this for day-to-day (consumer) bill pay. A bank could throw fewer roadblocks when paying a $500 invoice vs. a $50,000 invoice

The workflow I described also is part of a dual-approval model where a finance person sets up the transfer and it's approved by a 2nd person (who's presented with a bunch of authentication/password/2fa prompts, almost to a fault).

But again, I'm fine with it because it's large amounts of money in corporate bank accounts. But yea, agreed it would be annoying and should be toned down in a consumer use case.


> I have a nice set of front windows, but that means a risk of someone breaking through them. I accept that risk for the windows - the extra light and visibility is well worth it, and the windows are not the only way in. Compare to short sessions.

Okay. I will compare a house to short sessions.

Short sessions are like having a house where every door has a lock and I need to use my keys to get into each room, or back into a room if I stay in it for too long, including the shitter. I also need the keys to open the windows and the oven. And the developer goes "well, you shat yourself? That's your fault, should've had the keys on you at all times".

That's what short sessions are. Delusional security clowns ignoring usability. It's less than security theatre, it's security circus.

Requiring re-auth to pay some money or delete something important is reasonable stance.

Requiring re-auth a few times a day just to browse data in the app is not.


That is a straw man argument. Nobody was saying security should be prioritised over the goal of the product.

Security is just another non functional requirement (mostly) of a product.

To obtain good enough security, defence in depth is still a good principle to follow. It means you are not putting all your eggs in one basket. It often means that each individual control does not have to be perfect or massively over engineered.


So in this case, when short sessions are a clear negative for a lot of products, and we have existing examples of HUGE enterprise companies that have agreed and adjusted those sessions to be much longer for most cases...

I would argue that you are arguing to prioritize security over the goal of the product. Right here and right now - you are literally doing it.

> To obtain good enough security, defence in depth is still a good principle to follow.

I don't disagree! I just think that each "defense" needs to actually be considered on the whole, not as just another bonus to security. Short sessions SUUUUUUUUUCK. They make your product shitty. Users hate them. They don't add a ton of security.

Are there products that should still have them? Sure. Probably lots of products in VERY specific places. Should they be the default everywhere? Sure as hell not.


Don't prioritize security if the root cause of the security breach is someone getting access to the session tokens, not that the session tokens are arbitrarily too long. That attack might happen once ever, and it doesn't really matter if they have five or ten minutes; you're still screwed, because they can just go get another session token next time and be prepared.

Optimize the application to run the best for all the users first and then adjust the security implementation as necessary. Otherwise, you could DoS yourself by trying to be too secure.


"Optimize the application to run the best for all the users first and then adjust the security implementation as necessary. Otherwise, you could DoS yourself by trying to be too secure. "

I think it depends what kind of business you are running and what a security breach means for you or your users. If it is a hobby forum, well yes, UX matters more. But if you screw up security for anything big-money related - you probably want to prioritize security first, not after you've lost some billions.


Zero days exist almost every day and there's nothing you can do about it. So make sure that what they steal, if they do steal anything, is a bunch of encrypted envelopes instead of raw pictures.


Yes, however, in the limit it's also the case that you can have either a secure or a useful system, but not both.

And then, since the top comment was encouraging a wider view, there's even wider view: business needs. Truth is, short sessions make me viscerally hate the product, creates a desire to avoid using it, and becomes a factor in decision to switch to a competing offering when such possibility arises. Or to not switch - one big reason I'm still using my current bank account instead of another one I had to have to get better mortgage rate, is because the bank operating that other account has short session times and associated pseudo-security annoyances.


Really, the issue here is that security is never treated quantitatively. At least in my experience - are there examples of quantitative security?

In some sense, the problem is that the goal is zero controls breaking, but of course, that also provides no information on security. Intuitively, it would seem that parameterizing security (in a diagnostically useful way) would also require a number of quite different measures. For instance, it matters who (how sophisticated and motivated) your attackers are. And some of the parameterization would be based on use - every time you force someone to type a fixed password, that password is more exposed. Combining all these quantities would just be Bayes-based.

I think an exercise like this could be instructive. For instance, short sessions mean more authentications, with both usability costs and more failure modes. But they also mean that an attacker would need to wait until the next session, or has a smaller chance of hitting a live session. Again, these are quantifiable, at least in rough terms. And this kind of analysis does expose some additional questions: e.g., that short sessions make no sense without guaranteeing the security of the client system (ideally by reinstallation from a known-good reference).

At least in my experience, "security people"


The closest security generally gets to "quantitative" techniques is in applying risk management to threat models.

But the way risk is managed in the industry (multiplying likelihood and impact) is completely incoherent and voodoo. See the book "How to Measure Anything in Cybersecurity Risk" [1] for a good explanation of why it doesn't work and better ways to do it.

Which is a long way of saying, no, security doesn't use quantitative techniques mostly, but it would be possible if people understood how to measure and manage risk properly.

[1] https://onlinelibrary.wiley.com/doi/book/10.1002/97811198923...


But, you need to make sure the extra controls are actually providing depth.

I think in some (very narrow) cases, short session times and aggressive reauth do add depth and can be an effective part of a security program.

But, all too often, defense in depth is used to mean:

* Vendors In Depth, whereby every security tool under the sun has to be deployed (or at least purchased) to achieve “security”. Or, worse, the Noah’s Ark model where you buy two of everything.

* Uncoordinated and/or seemingly random layering of controls that either don’t add to the overall confidentiality, integrity, or availability of the system being protected. Or, worse, are positively counterproductive and actively reduce the real-world security of the system.


> Vendors In Depth

Vendors will always love the phrase 'defense-in-depth' more than they care about assessing whether additional layers of tooling and controls actually provide more defense, because a vague appeal to defense-in-depth is a great way to justify purchasing more security software.

It would be naive to think this doesn't affect the volume of research produced promoting and emphasizing the importance of defense-in-depth either, or how frequently papers on basically any kind of attack end with something that means 'this is why a defense-in-depth approach is needed in XYZ area' even when defense-in-depth is at best incidental to the substance of the paper.

You can trivially tack that on to the end of pretty much any paper about any exploit, which raises the question of how meaningful an observation it really is.


> A short compromised session is still a compromised session. The duration usually does not prevent the attacker from achieving their goal.

Why "usually does not"? The stolen hard drive, leaked data, etc. can happen at any point in the future... one minute to one year from now.

For most of those times, a short session will prevent the attacker from exploiting it.

> The vast majority of ways to compromise a session already give you access far beyond that session itself

For the local system, yes. But not the for the remote system.


> For the local system, yes. But not the for the remote system.

This is fair, although you've picked a specific case where the local system would not likely access the remote system again after compromise (because theft removes access for the normal user) and an expired session might be helpful as security.

But the other thing about theft is that it also immediately alerts the user, and having a simple "Sign me out everywhere" button is a more robust solution that causes much less user pain and mostly accomplishes the same result.

As for

> one minute to one year from now.

I'm not arguing for indefinite sessions. I'm arguing against short sessions. Rotate it after 30 days if you want (or 5 days, or 1 day - just don't do it every 15 minutes). It'll catch 99% of the cases you're thinking about solving, and it's literally 3000 times less annoying for your users while being nearly as secure.


> theft is that it also immediately alerts the user

The user may or may not know of theft or leak.

And even if they are aware, they may not remember every remote system they were logged into.

> Rotate it after 30 days if you want (or 5 days, or 1 day - just don't do it every 15 minutes).

So we've gone from arguing that short sessions doesn't work, to arguing that it works for such a large % of the cases that it could be relaxed.


I don't think my argument has changed a whit - Short sessions cause more pain than they solve. They are a bad security tool for almost all products.

Arguing that short sessions are bad is not the same as arguing that rotation never has its place. Rotation can provide some benefits.

My argument is that EXCESSIVE rotation (aka: short sessions, the whole freaking conversation) is folly.

It's a bad decision usually implemented without thought or understanding (it's on the checklist...), which has a high cost to users, and actively degrades the product.

In return for the costs of short sessions, what are you proposing that your users gain?

Because personally, logging in every 15 minutes for the rest of my life is a god damn travesty of an exchange to make to cover me on the one case where my laptop goes missing. Especially since that's not a very common vector for account theft. It's SO much more likely someone just calls the help center and claims to be me and gets in just fine.


A stolen hard drive is rarely what you defend against, and it arguably might not matter at all if, say, the short-lived session is dead but the long-lived password manager one is up.

Exploits owning software on machine are far more common than machine itself being stolen.

I'd also argue that tying re-login to the sensitive actions is far better way to fight it. Basically have long session for nondestructive actions but short for any potentially harmful ones like changing payment options.


Stolen hard drive = Stolen laptop/phone/tablet


Or decommissioned drives.

I've seen businesses put used laptops straight onto the second-hand market, barely doing even basic formatting of the drives, unencrypted. Even less so on the private market.

Heard from security friends also that there are examples of attacks that succeeded thanks to drives found in trash outside enterprise.


The shorter tokens are useless by themselves; but if combined with proactive security, they can make a real difference. Some concrete examples, based on the places I worked at, which provide support for at least not having infinite expiration:

- Attacker steals a Google token from a regular employee. What can they do? No, they can't "make themselves admin, or wire all your bitcoin to their account", because such things are not available to regular employees. Nor can they use email to take over the account, because all the stuff that matters uses corporate SSO without automated password resets. So all they have is access to the corporate Google Drive, which has a few terabytes of not-so-valuable stuff + important documents somewhere.

With long-living token, the attacker has plenty of time to browse around and download valuable stuff, perhaps even do it slowly so it can fly under radar. But with the shorter token, this does not work: downloading everything will take too much time / trip the alarms, and manual browsing is not fast enough. (I can imagine some sort of sophisticated AI system which auto-downloads interesting docs, but I am sure most attackers have nothing like this)

- Attacker steals AWS credentials -- again, they either have to be fast and get detected (there are alarms on unusual high download rate) or go slow and worry about token expiration. Even if they manage to steal important data, at least the expiration will force them to use faster methods more likely to trip the alarm.

Even with a full machine compromise, short-lived tokens are useful. An npm package which steals all tokens once, at install time, will likely go undetected. An npm package which installs a persistent backdoor will be much easier to detect. So having a long-lived token makes attackers' lives much easier.


> You haven't actually addressed either of the author's points, though. Namely: 1: A short compromised session is still a compromised session. The duration usually does not prevent the attacker from achieving their goal.

The worst hack at a company I worked for happened because an old server that was supposed to be decommissioned was left plugged in and connected to the network. Some time later, a hacker exploited a vulnerability in an unpatched package on the machine and got in (since the server was supposed to be decommissioned, it was not patched when the rest of the machines were).

The hacker used old credentials on the machine to gain access to other machines in our network. If we had been rotating credentials more often, this would not have happened.

We had systems in place to patch and maintain our machines, but once the machine got lost in our inventory management system, it was forgotten about. It wasn’t monitored anymore, and other protections lapsed.

This category of exploit is prevented by credential rotation, because this type of exploit is only possible if a system is neglected.


> This category of exploit is prevented by credential rotation

This category of exploit is prevented in a huge number of ways - not a single one of them is "make my user's cookie/token sessions 15 minutes long"


It could also have been prevented by… not using user credentials on a server to access services? By doing regular checks on your inventory? By using IDS on your complete network? By not trusting your network? (Zero trust security). By regularly simulating attack scenarios and then checking your readiness? By implementing ISO 27002?


> 1: A short compromised session is still a compromised session.

But an expired compromised session is not a compromised session.

If a session cookie/ID is compromised after it has expired it is no use to anyone.

But if a session cookie/ID has no expiry built in, then it remains an open compromise risk forever.

Situations where we can expect the session identifier and the context in which it is stored to reasonably be under the control of the user for a short time - while they are actively using a site, say - but where it might not remain under their control after they've finished using it, are a real part of real threat models.

If I have some cross-site-request-forgery phishing scam that works on users who are logged in to a bank website, that requires them to have an active session at the time they open their email, or visit my site, or whatever... if their banking sessions time out after 15 minutes of inactivity, they are less vulnerable to that threat than if their browsers keep them logged in all the time.


While security and usability are in some sense opposite poles of a continuum, it’s not strictly a matter of tradeoffs. It’s easy to imagine a decision that considerably decreases the usability of a system without appreciably increasing its security or even decreasing it.


When I think of short session expiry, I think of access_tokens having a limited lifespan, so that if they get accidentally logged at the remote end, or the person is fired, their access lapses. This is a good idea.

If we are talking about bounding the lifetime of a refresh token, I agree with the article, but that is not what I think of when I think about credential expiry. Credentials that you send to third parties should be short-lived. Without that underlying distinction, I think the article is dangerously confusing. Time-bound credentials have many useful features.
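The distinction can be sketched roughly like so (hypothetical lifetimes and function names, not any specific OAuth library): the short-lived access token is the only credential that travels to third parties, while the long-lived refresh token stays between you and the auth server and mints replacements.

```python
import secrets
import time

ACCESS_TTL = 15 * 60          # short-lived; this is what gets sent around (and logged)
REFRESH_TTL = 30 * 24 * 3600  # long-lived; only ever presented to the auth server

def issue_tokens(now=None):
    """Mint a fresh access/refresh token pair with their expiry times."""
    now = time.time() if now is None else now
    return {
        "access":  {"value": secrets.token_urlsafe(32), "expires": now + ACCESS_TTL},
        "refresh": {"value": secrets.token_urlsafe(32), "expires": now + REFRESH_TTL},
    }

def refresh(tokens, now=None):
    """An accidentally-logged access token goes stale fast; the refresh
    token trades for a new one without re-entering credentials."""
    now = time.time() if now is None else now
    if now >= tokens["refresh"]["expires"]:
        raise PermissionError("refresh token expired; full login required")
    tokens["access"] = {"value": secrets.token_urlsafe(32),
                        "expires": now + ACCESS_TTL}
    return tokens
```

The article's complaints mostly apply to aggressively expiring the refresh-token side of this split; expiring the access-token side costs the user nothing visible.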


> Security is ALWAYS (fucking always, yes really - fucking always) a tradeoff.

It’s helpful to me that ACLs prevent me seeing other users’ files. I’m not trading anything off that I’m aware of.


> I’m not trading anything off that I’m aware of.

Really? Really???? You're not, say... having to manage ACLs, and having to run a system that can enforce ACLs?

Because both of those are tradeoffs. They might be "completely sane and reasonable" tradeoffs! But they are still tradeoffs.

Also - in a more blunt way: Simply not being able to see other peoples files was considered a pretty big negative in some computing crowds when it was first enforced. That's a huge tradeoff: You're sacrificing visibility for privacy.

I think that turned out to be the correct tradeoff in most places - but to be clear - you absolutely are losing a feature to gain a feature.


> Really? Really???? You're not, say... having to manage ACLs, and having to run a system that can enforce ACLs?

In this case, it's a more user-friendly tradeoff, because those burdens can be borne by the infosec staff and other IT staff rather than end users. I'm happy when I can arrange things that way for my own users for sure.


> Security is ALWAYS (fucking always, yes really - fucking always) a tradeoff.

> The most secure application runs completely isolated, with no input or output, and is totally, utterly useless. But no worries - it's secure!

Yeah, as the Rust saying goes “The safest code is code that doesn't compile”


Librewolf has an option to delete cookies on exit. Then you are as secure as possible.


Except if the cookie is copied elsewhere, then that cookie/session needs to be expired?

(but yes, expiring locally reduces the window for copying that cookie)


Pretty sure every browser does


This article makes so many unfounded assumptions in order to make a point.

> Presumably they are going to immediately make themselves admin, or wire all your bitcoin to their account.

Attackers running scams like a sophisticated BEC will lay dormant for long stretches of time to gather information before acting. Sure, they can export the emails and set up auto-forward rules to maintain visibility when the session expires, but they've now made a lot more noise to detect on. I've seen threat actors view mailboxes once a day for months before they launch this scam.

> Also, it would be better to protect against this by securing the logs or using hard drive encryption.

Of course it would, but often it's not. It's that simple. It's crazy to think the person responsible for writing a secure app is also the one making decisions on endpoint encryption.

> some applications are used strictly within an company from company devices

Some are, lots are not. This reads like someone who has worked in enterprise environments with well funded security teams, not a small business with one IT guy running the show.

> But even then, the attacker could install a browser extension that sends your credentials to them the next time you log in.

This contradicts the rest of the article. Why is a company securing logs, encrypting disks, locking down where users can access apps, but then allowing anyone to install browser extensions?

I agree that short sessions are not the quick fix that some devs make them out to be, but the author is ruling out a perfectly acceptable control based on an imaginary end user setup.


I recently had a BEC on my desk where they had gained access months earlier to a real estate agent's mailbox. They took the time to create perfect forged documents and understand the agent's workflow. Finally it was time to tell a buyer where to send their Earnest Money and the actions were perfect. They made a mail rule that captured the RE agent's outbound message and then sent their own, an exact replica with just the account number changed. Even if the buyer had called to verify the message it would have been fine because the agent really did send a message.

Of course finance people are used to stuff taking an arbitrarily long time (partly the users, partly the system) so they were able to do this several times before anyone raised the issue of MIA transfers.

Oh and we don't know the exact date of the compromise because the customer was not paying for good log retention from Microsoft or exporting them to any kind of collector. We were able to uncover a lot, but I wonder how this goes for indy RE agents that do everything out of AOL or whatever.


If the attackers control the user's machine, they will have as much information as the user anyway.


Not if they don't have the passwords and 2FA devices to access company resources from that machine, and all of the sessions are expired.


If all sessions are expired, then the user doesn't have access to the information either - same access as the attacker. There are cached copies of emails too.


Killing short sessions would be somewhat nice from a user perspective, but I think this is unrealistic as others point out. Instead, let's focus on what makes short sessions super obnoxious.

In theory I don't mind a fast and painless re-auth that doesn't hugely interrupt my flow. But the current implementations seem to be actively user-hostile and intentionally annoying.

My company has moved to Okta for most things. Fine. But the session expiration and re-auth mechanism is extremely maddening. It comes out of nowhere.

Right in the middle of doing work, I need to re-auth to {jira, github, google suite, whatever} without any warning. Github in particular makes you re-auth to each org independently even if trying to view OSS material that doesn't require auth to begin with.

And of course SSO flows break anchor tags in URLs so good luck if you already closed the link that had the necessary anchor information.

Then let's talk about Okta in particular requiring three separate pages and clicks for user, pass, otp (because a single form with all three is "insecure" in a meaningful way?). And then not respecting dark mode making an otherwise dark-friendly workflow have flashes of bright light at unpredictable times.

Sure, let's make regular re-auth a thing. I'd even do it once every X hours. But come on.

Let's make the re-auth schedule predictable (ugh google suite re-auth right in the middle of screen-sharing or doing deep work on a doc).

Let me pre-emptively re-auth everywhere all at once in a batch (pointing my finger at you, github).

Let's figure out a way to preserve anchor tags (javascript is a thing, sso sites could capture and sso providers could pass this through).

And dear god, Okta, please just support dark mode on your interstitials. I view that bright flash of light about 5 times per day now and it raises my cortisol levels and hurts my eyes every time.

I know TFA isn't about SSO flows, but SSO flows + short sessions highlight how annoying the end-result of this hodge-podge of auth is for users trying to do their job.


I definitely agree with you that better warnings and more graceful session timeout behavior would be a huge improvement. It's hard with the current auth standards where the IdP is only in the loop during the initial auth; it'd be on each app to gracefully notify and handle soon-to-expire sessions. That's pretty unreasonable across the large swath of SaaS apps and vendors (given how many have pretty buggy / difficult-to-integrate SAML implementations). When you throw a CASB or proxy like Cloudflare Access in front of everything, then you're fully in the loop, but that's a bit much for a lot of use cases.

> Then let's talk about Okta in particular requiring three separate pages and clicks for user, pass, otp

That's on your IDAM / Security team and how they've configured Okta / mandated requirements. Okta has fully passwordless, phish resistant, automatic flows with Verify on Mac/Win.


https://learn.microsoft.com/en-us/azure/active-directory/con...

Those “sessions which never expire” do, in fact, expire. It’s just that there’s more to it than time. The author should read up on the reasoning behind primary refresh tokens and session tokens.


They also expire faster when suspicious things happen, such as your IP address changing. Auth providers are pretty quiet about what they find "suspicious" — some are obvious, some are clever.


Session authentication tokens are effectively a form of credential. Best practice is to expire all credentials after a certain amount of time, because you cannot know for sure all the ways this credential might be compromised or when. It does mitigate risk to reduce the time window in which an attack is feasible / when the attacker might get the credential, and when they might start their attack. Just because you assume that every attacker has access to X, or has root on their machine, or is everywhere always, etc, doesn't make it true. Some attackers don't have a big advantage; mitigating their attacks helps security.

In addition, the kind of credential tends to dictate the expiration time. Some don't need to be expired that often. You can also mitigate a lot of the risk by requiring re-authentication to perform riskier actions, like changing email address, or authorizing payment. Combine with other methods like user fingerprinting to force re-authentication if it looks like it might be a different user.
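The "re-authenticate for riskier actions" idea above can be sketched as a freshness check on the session's last full login. This is a minimal illustration, not any particular framework's API; the `Session` class, field names, and the 10-minute threshold are all made up for the example.

```python
import time

# Hypothetical session record; field names are illustrative only.
class Session:
    def __init__(self, user, authenticated_at):
        self.user = user
        self.authenticated_at = authenticated_at  # epoch seconds of last full login

REAUTH_WINDOW = 10 * 60  # risky actions require a login within the last 10 minutes

def requires_fresh_auth(session, now=None):
    """Return True if the user must re-authenticate before a risky action
    (changing email address, authorizing a payment, ...)."""
    now = time.time() if now is None else now
    return now - session.authenticated_at > REAUTH_WINDOW
```

The point is that the everyday session can stay long-lived while the dangerous operations still demand recent proof of identity.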

If you're thinking "but we don't rotate passwords as best practice anymore, why sessions?" We basically gave up on passwords. It would help security if people rotated passwords, but unfortunately it's such a pain for the users that they end up doing insecure things like writing them down or incrementing a number at the end, so a compromised password tells you what the next likely one is. If they were randomly generated we would absolutely be rotating passwords still.


CIA: confidentiality, integrity, availability. Yeeting people out of their shit every ten minutes cripples availability, so we gotta balance. Like most things where the person making the call is not the person who deals with it, security ppl are often too aggressive about killing session duration.


It's worth noting that many services that offer longer session length also have tools for managing those open sessions and alert emails that geolocate IP addresses using those sessions. I'm sure the big boys do all kinds of statistical (AI if talking to marketing) analysis on session usage metadata (current IP, browser fingerprints, time of day) to try to determine if a session token/cookie has been compromised.

All that is a lot of extra effort and UI work for a web application that doesn't have the same immediate access expectations as a web email client or social media. Besides, many users use a password manager that autofills, so login isn't that painful as long as the 2FA persistent setting is respected.


"Thank god we set the session timeout to 5 minutes, or we would have been compromised" - no one ever.

I am in the 10-hour session timeout camp (or at least 4h, so you only have to authenticate twice a day). Session timeout checks are the same sort of checkbox tests that auditors (and pentesters) like, like the password policy where you have to change it every 90 days. And about as effective.

What's missing in the article is the difference between soft- and hard session timeout (Soft: Reset upon user activity. Hard: session gets killed after X hours regardless of user activity).


It really depends. I would certainly comment less often on HN if it logged me out at 4 hours.

For my online banking it's okay if they kick me out after 15-30 min.


Agreed, but is it about the way you use online banking or is it about security? I think it's interesting to consider because at the point where "security" starts to impact UX, you have to really scrutinize the difference between what's actually making an impact on your security and what's just theater.


It literally prevents me from banking well. It’s not enough time to do banking tasks that require analysis or reading.


Just to play Devil's Advocate, short sessions may actually end up being far less secure when you consider that humans are flawed.

If you force people to log in to a system very frequently, they might get lax with security and do dumb things like stick "post-it notes" to their monitors with their login info out of frustration.


Device malware, which is common, can't steal those.


But a keylogger has a better chance of stealing the password if the user has to enter it more frequently.


Hardware 2FA is common now among professionals, and is finally widely supported.


What I wonder is why do we apply such different standards/expectations to web and non web apps? E.g. desktop Slack doesn’t ask me to log in all the time. Are web app tokens that much more easily stolen? What about Electron apps then?


One difference is that Slack would not work if it logged out automatically after 15 minutes of inactivity. They would lose most of their customers. The only realistic risk to the user from Slack's point of view is that when the user walks away without locking the computer, someone might read or write Slack messages.

Sensitive applications still lock after X minutes of inactivity even on desktop, because the impact of someone else using your computer carries too much risk. One example of this is password managers, where they sometimes require a password or a biometric to unlock after a short period of inactivity.

You'll see the same pattern in web applications. I haven't used the web version of Slack in a while, but I remember that it didn't force me to login that often.

I think one of the reasons why many choose a short session expiration time is that they either can't or don't dare to do a proper risk assessment, and just do what their competition seems to do.

There may be technical reasons too. Web applications do have a higher risk of accidentally leaking passwords through low security and hostile actors. Reducing the TTL may not fix the issue, but it will at least reduce the window where an attacker can make use of a stolen token. It may not matter much if you have a dedicated attacker, but it at least adds some resistance.

Creating a secure application takes time (acceptance from business) and experience (junior developers) and many don't have either.


I think it's just that users are more likely to sign in to a website on a shared computer than they are to download, install, and sign into an app on a shared computer.


Often it’s because your JWT hasn’t expired, your IP matches previous, and so you’re authentic. Local storage is local storage whether it’s electron or browser.

If your JWT was close to expiry and your IP address is different and your user agent or headers are different then it may force to re-authenticate.
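A decision like the one described above could look something like this. Everything here is a sketch of the heuristic, not a real JWT library call; the function name, parameters, and 15-minute window are assumptions for illustration:

```python
def should_force_reauth(token_exp, now, prev_ip, cur_ip, prev_ua, cur_ua,
                        expiry_window=15 * 60):
    """Force re-authentication when a token close to expiry is presented
    from a changed IP address or user agent. Thresholds are made up."""
    near_expiry = token_exp - now < expiry_window
    fingerprint_changed = (prev_ip != cur_ip) or (prev_ua != cur_ua)
    return near_expiry and fingerprint_changed
```

The same check applies whether the token sits in an Electron app's local storage or a browser's; the storage location doesn't change the heuristic.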


Web pages share browser with other web pages, desktop apps don't.


There are two use cases here which are different.

Inactivity session termination mostly just protects against someone using your device if you leave it temporarily unattended.

Absolute session termination is a different control, which protects against compromise of the session token. So someone who hacks the device and gets the session token can only use it for a period, after which they would need your credentials to get another one. This is usually longer than inactivity termination.


In the real world, security is done by a bunch of nearly clueless people hoping to put up enough hard-enough roadblocks to make it practically not worthwhile for you to hack in.

If you don't have mathematically secure solutions, session expiration and similar are the imperfect tools you use to solve your problem.

One beef I have with short session expiration, though, is that in many cases it causes people to degrade security. This happens when people get annoyed by having to frequently provide their credentials and rather than enter credentials securely, use even more insecure workarounds.


> One beef I have with short session expiration, though, is that in many cases it causes people to degrade security.

One counterpoint—if sessions last too long, then I forget the password, and reset the password. If sessions are too short, then I choose an insecure password. Neither option is great for security!

Talking about systems for which a password manager doesn’t make sense, like your lock screen password.


If you're remembering your passwords then you are probably using insecure passwords or are re-using passwords across multiple services.


What don't you like about the first option? If you need the password reset feature, then it's necessary. If you don't need it, then remove it.


This is why I prefer short sessions and requiring frequent, passwordless, biometric authentication. Still relatively low friction for the user, and no password to remember or forget, while still reasonably high friction for an attacker.


@sama

Jk though I intuitively agree, I think this article doesn't really prove its claim.

> Are shared computers without user separation a thing? If so, these shouldn’t be used to access web applications with sensitive information at all, no matter how short the session expiry time is.

I would argue shared computers are one of the primary threats deterred by this.

Additionally, in the "attacker has access to device" section, the author points out an attacker could still access email. But email services tend to be more developed and often have the ability to expire all tokens (or log out all browsers). The exact place short session expiration helps is when the service does not provide a feature to log out all sessions remotely. That would be a better deterrent by far, but may be more complex to build than a default timeout.


The way I like to handle session tokens, which seems like a good compromise between security and user friendliness: automatically renew the token as long as the user is actively using the website. I usually use a 12 hour token and renew if it's 8 hours old or more. The renewal requires the use of a second token stored locally in the browser, so even if you somehow sniff the token off the wire, you still can't renew it without control of the user's browser.

And frankly if they have control of the user's browser, they could probably install a key logger anyway.

That way, if someone is constantly using the site, they would never notice a logout, but otherwise they might have to login once a day.
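A minimal sketch of this two-token scheme, assuming the 12h/8h numbers from the comment. The token format, names, and the HMAC signing detail are my own illustration, not the commenter's actual implementation (which would also verify the token signature on every request, omitted here for brevity):

```python
import hashlib
import hmac
import os

TOKEN_LIFETIME = 12 * 3600  # auth token is valid for 12 hours
RENEW_AFTER = 8 * 3600      # renew once it is 8+ hours old

SERVER_KEY = os.urandom(32)  # server-side signing secret; illustrative only

def issue_tokens(user, now):
    """Issue an auth token (sent with every request) plus a renewal
    secret that only ever lives in the user's browser storage."""
    renewal_secret = os.urandom(16).hex()
    payload = f"{user}:{now}:{renewal_secret}"
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{now}:{sig}", renewal_secret

def maybe_renew(auth_token, renewal_secret, stored_secret, now):
    """Return a (token, secret) pair, or None if a full login is needed."""
    user, issued_at, _sig = auth_token.rsplit(":", 2)
    issued_at = float(issued_at)
    if now - issued_at > TOKEN_LIFETIME:
        return None  # expired: full login required
    if now - issued_at < RENEW_AFTER:
        return auth_token, renewal_secret  # still fresh, nothing to do
    if not hmac.compare_digest(renewal_secret, stored_secret):
        return None  # a sniffed auth token alone cannot renew itself
    return issue_tokens(user, now)  # hand out a fresh pair
```

The key property: a token stolen off the wire is worth at most 12 hours, because extending it requires the second secret that never leaves the browser.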


Do you sign a request with that reauth token or do you send the reauth token? Because if they’re getting mitm then all is lost. How long does the reauth token last for? Is a new one issued when you reauth?

As negative as I sound, the system is better than just allowing the regular api token to extend the session. Of the hundreds to thousands of api requests a regular user may make a day, only 1 is the reauth, which is nice.

Security is an onion. Every layer is a bit sour and eventually you’ll cry.


> Do you sign a request with that reauth token or do you send the reauth token?

Sign

> Because if they’re getting mitm then all is lost.

This is true regardless of your authentication scheme, which is why https is important and then you have to hope they didn't just click through a warning and ignore it or have their trust store compromised or....

> How long does the reauth token last for?

Until it is reissued. But since the auth token is only good for 12 hours, effectively 12 hours max, since it can't be used for reauth without a valid auth token.

> Is a new one issued when you reauth?

Yes.


With mitm everything is lost. I could just wait until the user gets a reauth token again, assuming the reauth token expires (which it always should). Netflix had an issue a couple years back where sessions would never expire and you could use Netflix for years with the same session cookie.


Some big auth providers throw in a few extra signals too. Your IP address changes? That might hurry up expiration a little. Your user agent changes? Big red flag. Etc.


Another pet peeve is getting logged out just because chrome or FF upgraded in background...


This is total bullshit. Stolen sessions are being actively sold on the darknet in bulk. If nothing else, the best thing that can happen is that they have expired before someone has the time to exploit them.


Totally agreed. OP has no idea the scale of account takeover attacks by stealing tokens for any large online service (Google accounts, iCloud accounts, Microsoft accounts, etc).


Good idea. Let’s set it to 0 minutes. If they’re not in an active mouse click rhythm just nigh of Liszt, log em out! is what I always say.


> Are shared computers without user separation a thing? If so, these shouldn’t be used to access web applications with sensitive information at all

Shared computers at the library are still the only way some people can access the internet. For others it's the only way they can navigate sites that aren't mobile friendly (gov't sites, etc)


Someone please tell Apple, because they're the worst at expiring sessions. This (among other reasons) makes the developer experience terrible.


On what services? I’m not an iOS dev, but I just haven’t been hit with this on apple services as a user.


I have to log in again pretty much daily on icloud.com even though I constantly click the "trust this device" checkbox.


App Store Connect, Feedback Assistant, developer.apple.com, etc.


On all their websites. Microsoft is quite a bad offender of this also.


Microsoft is the worst. Endless auth redirects, and God help you if you've ever had multiple Microsoft accounts on the same device.


Dunno, skype asked my password a long time ago.


IME github session is almost eternal.


It's possible to connect repositories / organizations to an external SSO. At my work place I have to re-authenticate several times a day. Luckily it usually doesn't require me to login at my work account each time and just redirects back automatically, but it adds an extra layer of protection. If someone gained access to my Github session either physically or externally it would not be enough to access my work.

I have these layers of protection between the web page and the actual code.

1. Github password

2. Github MFA

3. SSO password

4. SSO MFA

If an attacker wants to be successful, they probably need to actually use my machine as I suspect that both Github and the auth solution at work checks ip-addresses and other things that they would also need to get hold of. In that case they probably have complete control of my machine anyway and can do what they want.


More Microsoft-branded, then - GH's current session handling largely predates Microsoft.


They also expire cookies in Safari like clockwork, forcing you to login again to almost everything.

Very annoying. Supposedly it prevents tracking, but it's just annoying.


Apple's web properties are so, so bad. They clearly don't invest in anything that isn't a native iOS / MacOS app.


They don't really invest in native Mac apps now either.


MSFT and SFDC are also notoriously bad.


My banking site has an extremely short timeout, like 5 minutes but feels like less, which is really burdensome when you're looking at stock data while trying to read news about it.

So I had to create a Tampermonkey script which reloads only a specific page of the bank every 30 seconds, and I move that tab somewhere where it doesn't disturb me.

How is any of this improving security?


As someone that regularly works with malware, cybercrime and SaaS attacks I strongly disagree with this article. One look at Genesis Market is enough to see why "better auth", "2FA", etc, do not solve the problem of malware on the machine stealing cookies (sessions).


"That being said, I don't know if short session expiration is the best solution in 2023. As the article points out, major corporations like Google don't use short sessions, even though their services are used for a variety of sensitive things and they're huge targets."

Is it possible that Google might prefer that users stay "logged in" because this allows Google to conduct more data collection, for purposes of supporting its online advertising business? Perhaps staying logged in is "more secure" for the ad targets that use Google than logging out. Maybe it's both, but not in equal measure. Which one would be more important to Google and its shareholders? Would a company like Google make tradeoffs that favour its bottom line? Are there any differences between the core business of Google (advertising) and the core business of financial institutions? For example, Google's customers are advertisers. The folks that have Google "accounts" for non-advertising purposes, i.e., probably the majority of folks in the world who have Google accounts, are not customers.


Disagree completely. People walking away from public computer terminals, e.g. libraries, are greatly helped by shorter sessions, even if they're imperfect.


This is solved by the "someone else's computer" checkbox in the login form that makes sessions expire. 99% of the time, though, people are using personal devices. 99% of people should not suffer because of the forgetful fraction of the 1% who use shared computers.


The crowd near terminals helps there. If a thief wanted to do something shady, he would be shoulder-surfed by many eyes and breathed down the neck.


This one seems prone to bias.

Out of all the people disagreeing in the comments who actually takes a position against self interest?

1) Security is more your focus than UX, but you agree with the article.

2) UX is more your focus than security, but you disagree with the article.


Security is more my focus than UX (although I am not myself a "security professional"), and I neither agree nor disagree with the thesis of the article. I think the right answer is the classic "it depends". Specifically, it depends on how bad it would be if a session token is leaked, how the rest of the authentication system works (can session tokens be revoked, what is required to refresh a token, how long is the absolute expiration of a token, etc.), and even whether the user is using a public computer or not.

That said, the arguments in this article make many dangerous assumptions. Such as assuming that users never use a shared computer (like, say, the computer in a library), and that you can completely reliably prevent tokens from ever accidentally getting logged.


The mistake is in thinking security is a separate domain from UX. While expertise in it may be, it applies everywhere.

To draw a comparison here, should we assume that the author advocates for phones or desktops which do not automatically lock after a set period? It's a similar threat vector (unattended phone/pc) in the physical world. I'd assume no, because the specifics of that threat model are different.

This is true of other software as well.

UX and security are only at odds in the grey areas. I'd wager most people posting here -- regardless of affiliation -- would be upset if their web banking solution didn't expire their sessions for days.


Agree they’re not at odds. I’m not trying to say they’re adversarial at all.

Rather, it’s that I suspect some comparisons are less likely to get objective thinking out of us than others.


What's security and what's security theater?

Is XSS a fact of life that needs to be protected against or a fundamental flaw of the protocols we currently use that could be fixed at the root if it didn't make web development more expensive/inconvenient?


Short timeout is neither security nor UX, it's security theater to cover the owner's ass when something happens.


Also, not respecting your own "Remember me" checkboxes does not make happy customers.


My pet theory on this is developers never test it and QA rarely do either. Both groups are logging in/out constantly and if that checkbox doesn't work it's easy to fob off as "maybe I didn't check it?" or "my account switching broke it but that's a special case".

Every time I have to login again to a website that I told to remember me or "trust this computer" I get frustrated and then immediately think "Ehh, I understand how this happens". In a time when a password manager is nearly a requirement it's not the end of the world but it still bugs me since I really don't think most of my sessions have any need to expire unless I explicitly log out (or void all my remote sessions).


You'd think simple log in/log out test would be trivial to automate...


This confusing wording has always bothered me. Should "Remember me" keep me signed in, or just remember my username the next time I need to sign in?


The main reason you would want short user sessions is if you can't be sure whether the end user device is adequately secured.

For example, if you're building a banking application, there's a huge monetary risk in allowing someone to potentially leave an open session on an unlocked shared computer.

Is this something that we could resolve with browser standards, for example the browser being able to provide some kind of a hint about how well secured the end user terminal is? Not cryptographic remote attestation or anything, just something simple like a header or API that would return some basic information such as whether the user has a screen lock with password enabled and whether the computer is a shared device or kiosk.

On Android, there is the KeyguardManager.isDeviceSecure method which provides this sort of functionality.


The threat model is extremely different for different types of websites. Banks often look at a variety of shit, ip country and often buy off the shelf systems hardened around faking tools.

I don't care what Most websites for security (to some degree) except my bank and my Gmail.


I'm going to go with that old chestnut: It Depends. The author gives a variety of scenarios where the session could be stolen, but doesn't really discuss the context and threat models around them. There is no one-size-fits-all. Shorter expiration times might be the right choice in some scenarios, and useless in others.

Also, yes, shared computers without user separation are still a thing. Not all libraries have the budget or technical expertise to secure shared computers as well as possible. Also, there's public WiFi, something I'm betting the author uses all the time.


If an attacker steals a session cookie with XSS or session fixation, the attacker immediately gains access to a valid session and can keep performing requests to keep the session alive. An absolute timeout would limit the amount of time the attacker has, but realistically this wouldn’t really hinder any attacker.

It is a lot harder to obtain valid short term tokens than it is to obtain long lived tokens. The insecurity of long term tokens is demonstrated by the many Elon Musk crypto scams found on Youtube.

I do agree that it is always a question of risk/reward. But to argue against it by saying short term tokens don't provide any added security, because a hacker can still gain access to a valid short lived token is disingenuous.


This. It has to do with the window of time during which the attacker could access the session. If a service is meant to be used, say, once every 3 months, setting the session expiry to 7 days would make the usable attack window 7/90 = 7.8%. However, if the expiry is infinite, the hacker could just access the session any time.

It's also much easier to obtain an active session token than to configure a tool that obtains the tokens of future logins.

Perhaps it doesn't matter that much if the service is meant to be logged in all the time tho.


Is there a list of "security advice that doesn't really make sense but we keep following just because"? This is a great one, another good one is regularly changing passwords. What else?


NIST dropped the password change recommendation a while back [1] but it still lingers on. The staying power and long tail of this deprecated advice is unfortunate, to say the least.

I don't personally agree that short sessions is bad advice, but Phil Venables has an article that you might enjoy, "Ceremonial Security and Cargo Cults" [2]

[1] https://pages.nist.gov/800-63-FAQ/#q-b05

[2] https://www.philvenables.com/post/ceremonial-security-and-ca...


My experience with security auditors from big firms is that they have a checklist including recommendations like 90-day password changes, composition rules, and so on, and will probably never get rid of those.

You may be able to explain to the assessor that "we don't force password changes because NIST no longer recommends it", and they may be sympathetic, but they are still ultimately going to deliver a report that you got dinged on two items because you answered those parts of their questionnaire "wrong".

I have had issues raised for a site having a robots.txt file. NOT that there was a sensitive URL listed in the robots.txt file, or that we were using it to try to hide stuff that wasn't locked behind authentication. Just that we had one at all.

It ends up being way easier to just get rid of it and comply, than try to explain to multiple people at different levels of management how robots.txt works and how it could be associated with vulnerabilities due to misguided usage while also having NOTHING to do with security when used properly.


Reminds me of the anti virus software at work many years ago that did not allow me to download a password encoding library, because the filename contained the word "password"

I've also experienced automatic security reports that complain that the configuration file contains the word "password" (as in "database.password="). I had to argue with them that we did not actually store passwords in Git, as they could clearly see, but that it was set via an environment variable by a secrets manager when actually running in a container. Next time we had a similar use case we would just give it a different name to avoid this complication.


> another good one is regularly changing passwords

I believe someone stuck forced password changes into legal banking regulations, at least in the EU, in spite of everyone having hardware- or mobile-based tokens.

Needless to say, I just increment a number 10 times, because they "prevent password reuse" as well.


I think so too. Thankfully most of them enable face/touch ID login so logging in is pretty seamless.


My favorite example of putting security ahead of usability:

"Unexpected item in bagging area. Please remove item from bagging area." "Item removed from bagging area. Please wait: an attendant will be with you shortly."

This one sets me off because I worked in the self-checkout division.

The simplest way for stores to improve customer satisfaction and increase usage of self-checkout was to disable the bagging scale and accept that there would always be a certain level of "shrink," or grocery item theft.


France has a workable system: you do all interactions with the machine (it doesn't weigh your items), but a clerk overseeing you and three or so other people checks your receipt against your bags at the end.

Like the Costco receipt checker, it's very unlikely this person will flag anything unless you're suspicious as hell. It's like those "eyes" stickers in Asia: they keep honest people honest.


It depends on the system, but I've seen the weight system work consistently well in one store chain and fail on every other item in another store chain. Perhaps there's a leniency configuration, and some stores set it to "ultra, ultra paranoid" mode?


From the article:

Facebook, Google, Amazon and GitHub have sessions that never expire.

In the case of Google, GitHub and Amazon, I routinely have to re-auth. I'm not sure that this is accurate.


I agree with the author, but I would add "... but long session expiration harms security".

In the real world, if an infostealer is stealing your session cookies, it doesn't have to be a one-time thing; it can send your cookies back on an interval.

But speaking of which, SSH private keys and user auth certs with overly long lifetimes are just as much of an issue. SSH is the worst offender, but for the unaware: you can enroll a cert for a Windows account, and even if the password is rotated regularly (like your super-secure HashiCorp/CyberArk/Thycotic-rotated service account), that one cert can be used to get Kerberos tickets and log in/auth just fine; the cert lifetime is usually months if not years. Smarter folks use a CA with SSH, but the thing with CAs is that you must use an OCSP responder for short-lived certs because a CRL isn't always feasible, and maintaining OCSP for this kind of frequent use at scale is no simple matter either.

Back to web sessions though: in the cloud and on Azure AD/O365 stuff (like OneDrive or Teams), the session token does get renewed every few hours, but without any user interaction, because the device is trusted. So all a threat actor needs is one session/token and the ability to proxy traffic through the victim host. Matter of fact, my secret yet controversial opinion is that SSO should never be implemented without 2FA, period. No exceptions! The cred you manage centrally also gets pwned centrally. A lot of crap like device auth and conditional access policies makes SSO very dangerous. If I had a company, I would mandate YubiKeys and password managers with app-specific creds by default, and SSO with YubiKey where possible, with one-day expiration as the default.


> If an attacker steals a session cookie with XSS or session fixation, the attacker immediately gains access to a valid session and can keep performing requests to keep the session alive.

Objectively false? If they get the refresh token, sure, but even that has an expiration and isn't refreshable in many common implementations.

If they only get the access token, then this is completely false; you can't refresh a session using only the access token.


> Facebook, Google, Amazon and GitHub have sessions that never expire. They think it’s an acceptable risk. I think they are right.

Three of these companies have a strong vested interest in keeping you logged in all the time and to minimize the friction of interacting with their services.

GitHub, if you use 2FA or corporate SSO, does expire sessions. Perhaps they make the tradeoff of risk vs convenience for username/password users, as is their right.


The proper way to handle the problem is to tie session IDs to more than just a cookie or other fixed value; so that if they get exfiltrated they cannot be used without reauthenticating.

For example, the Session ID becomes "IP address + session token + browser ID" or something, then if I get your token, and even if I can mimic your browser fingerprint, I'd still have to be coming from your IP.
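A rough sketch of that idea (the names and the exact fingerprint inputs are illustrative, not from the comment): the server records a fingerprint of the client attributes at login, and a stolen token fails validation unless presented with the same attributes.

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # per-deployment secret (hypothetical)

def fingerprint(ip: str, user_agent: str) -> str:
    """Keyed hash of the client attributes the session is bound to."""
    msg = f"{ip}|{user_agent}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()

# In-memory stand-in for the server-side session store.
sessions: dict[str, str] = {}

def create_session(ip: str, user_agent: str) -> str:
    token = secrets.token_urlsafe(32)
    sessions[token] = fingerprint(ip, user_agent)
    return token

def validate(token: str, ip: str, user_agent: str) -> bool:
    """An exfiltrated token fails unless it arrives with the same IP + UA."""
    expected = sessions.get(token)
    return expected is not None and hmac.compare_digest(
        expected, fingerprint(ip, user_agent))
```

As the replies below this comment point out, binding to IP breaks legitimate sessions for mobile and VPN users, so real systems tend to treat a fingerprint mismatch as a signal for step-up auth rather than a hard failure.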


Careful about doing this. Laptop and phones change networks. If my wifi is weak you'll see my phone switch to data intermittently. Forcing me to re-auth every switch would be painful. Some VPN providers also change IP addresses regularly (I've seen <1 minute between switches).


Also people in countries where the government censors the internet often use VPNs to bypass that censorship, so their IP address might change many times a day, including jumping to a different country. IP addresses should really not be used for anything beyond packet routing.


You can adjust it to accommodate this, by allowing more than one single combo to be "legit" but it does add extra friction.

However, when combined with other things it can at least let "my browser, at my home on my fixed IP" not have to login every ten minutes.

Other systems handle this by only doing the "session detailed check" when you perform privileged activities.


This is going to fail and break sessions for users way way more often than it's going to stop a bad guy


The point of short sessions last I saw is not necessarily for technical reasons but for helping with legal reasons. That is, it means to lawyers that an authorized user could not have performed actions outside of a certain time window. This makes the discovery period for legal proceedings substantially faster.


I use one website semi regularly that has what feels like a 5 minute session expiration, but is actually probably 10 minutes. The session expires even if you are actively using it. Drives me nuts. But not nuts enough to write in and complain. Maybe I should send them this article ;-)


Let me guess... is it a bank? One of mine does this (expires the session after a few minutes, even while actively using it).

I find that bank websites usually go the complete opposite way on security from every other industry: very short passwords, digits only, not hashed in the DB (some sites ask for single character positions), mandatory password input via a virtual keyboard, no password managers allowed, no 2FA or only SMS instead of TOTP or other methods, etc...


Re single positions: my bank requires a password (which one hopes is appropriately hashed and so forth) plus a challenge to supply requested offsets into a secret phrase. That mitigates, I suppose, against over-the-shoulder attacks. And they have fewer requirements on the phrase, so maybe it's less likely to be written down?

I'm curious whether there's a technical means to allow the challenge-by-offset without storing the phrase in plaintext. I'm challenged for four positions; in theory it could store hashes of every possible combination of four offsets, but is there a better way?


Split the secret into characters then compute verifier bytes for each character:

verifier[i]=hash(pepper,salt,i,secret[i])[0]
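A minimal sketch of that scheme (function names and the `PEPPER` constant are illustrative): one verifier per character position, so the server can check whichever offsets it challenged without storing the phrase in plaintext. The obvious caveat: each position is drawn from a tiny alphabet, so per-character verifiers are far easier to brute-force than a hash of the whole phrase, especially if the pepper leaks.

```python
import hashlib
import hmac

PEPPER = b"server-side-secret"  # hypothetical; kept outside the database

def make_verifiers(secret_phrase: str, salt: bytes) -> list[bytes]:
    """One verifier per character: hash(pepper, salt, i, secret[i])."""
    return [
        hmac.new(PEPPER, salt + i.to_bytes(2, "big") + ch.encode(),
                 hashlib.sha256).digest()
        for i, ch in enumerate(secret_phrase)
    ]

def check_positions(answers: dict[int, str], verifiers: list[bytes],
                    salt: bytes) -> bool:
    """Verify the characters the user supplied for the challenged offsets."""
    return all(
        hmac.compare_digest(
            verifiers[i],
            hmac.new(PEPPER, salt + i.to_bytes(2, "big") + ch.encode(),
                     hashlib.sha256).digest())
        for i, ch in answers.items()
    )
```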


Is it... American Express? It's the only site that will log me out _while trying to pay my bill_. And forget trying to use American Express Travel.

I have written in to complain, but have only gotten "It's for security reasons" hand waving in reply.


"Session tokens are pretty secure. The threats described above are easily fixed with other measures, such as disk encryption, locking your computer, or HttpOnly cookies."

Or simply log out if a token is stolen.^1 After logging out, the session token is expired. In some cases one can log out non-interactively with a single HTTP request. No need to visit a page in a Javascript-enabled graphical browser and navigate interfaces.

One can use a forward proxy to insert tokens for selected sites. This allows one to scrub/disable/clear cookies globally but still remain "logged in" for certain sites.

1. The problem is the user might be unaware that a token has been stolen, at least until it is too late.


ANY session expiration does not help security. Authentication should not involve time. If I logged in, I should stay logged in until I explicitly log out. That's it. The session should live forever or at least several years from the last time I used it.

Speaking of internet cafes and other shared computers, they usually are secure enough. All internet cafes I've ever been to run specialized management software on their computers and everything gets wiped the moment your paid-for time expires. In places like universities and schools, people usually have their own domain accounts that reside somewhere on a central server, including their home directory that stores all the cookies and tokens.


This is an insane take; if your session token is buried somewhere in a log and gets discovered after a few hours, it shouldn't even be remotely possible that it's still valid.


A few HOURS? That's an insane take.

What kind of situation makes that possible even?


Hacker gets in, exfiltrates your log files that mistakenly have session tokens in them, goes through over the course of an hour or two before stumbling upon an admin session token that's still valid apparently.

Could easily write a script to find the session tokens and check them for validity. Many have their TTLs directly embedded, no need to touch a server to check, at least for that.


If a hacker is able to exfiltrate log files, what's to prevent them from exfiltrating other things? If they get into a server, they could just as well dump sensitive data from the database or do something to gain persistence, like installing a shell or adding their SSH key. If they get into a client device, they could steal more, and more valuable, data than just some logs: browser cookies, email credentials, app tokens, the owner's personal information, all that stuff.


Have to disagree. Lots of times the only opportunity is to get logs from some misconfigured Elasticsearch instance or other log-aggregation system, or read-only access somewhere on a web server, without the possibility of owning the whole system.


Permissions; if logs aren’t properly locked down that doesn’t mean everything is available.


In what kind of situation would logs be the only thing a malicious actor could access? If they exploited the application that writes them, then they have as much access as the application has — and if its job includes creating access tokens, it definitely has access to the database that stores user accounts.


Any one of a million web apps/frameworks have had file traversal bugs where you could read /var/log/rails.log but not /var/db/secrets.db. In many cases you also get app.env or whatever, and it's game over anyway, but it's not automatic. Or the logs land in an unsecured S3 bucket.


Happens all the time: access is through a low-privilege user, or the infiltration isn't via a user account at all but through some specific tool that itself only has access to specific things. Maybe the hack gets you into Splunk and that's it, and only for a few hours, because the session you hijacked expires...

Nobody exposes access tokens on purpose, there is no "its job includes creating access tokens". No system's job results in this happening as part of normal operation.

Also 100% wrong re: "it definitely has access to the database that stores user accounts". That statement makes me think you have no clue what you're talking about; you don't need access to users at all to validate a session token. In a JWT, the token is signed and you validate the signature with a cert, and that cert is public knowledge.

I've written auth services that have zero access to password data, which validate session tokens.


> In a JWT, the token is signed and you validate the signature with a cert, and that cert is public knowledge.

JWT is a particular type of token. Not everyone uses JWTs because they have drawbacks — in particular, a self-contained signed token can't be revoked without maintaining some sort of blacklist of invalidated tokens. So you have to resort to expiration to make up for that I guess?

In my own backend projects I use session IDs that are simply long random strings that are stored in the database. I delete the row when the user logs out of their session.


A hacker would start salivating if they heard someone who built a product storing PII said what you just did about session tokens for your personal projects.


And? I assume that my users are smart.

By the way, social media services like Facebook do store PII and have non-expiring sessions. And somehow, if someone's account is hacked, it's that person's problem, not Facebook's.


...it's 100% Facebook's problem.

I sometimes forget there is no barrier to entry on this website, thank you for reminding me.


There’s a ton of these pseudo-security measures bestowed upon us by various outdated security benchmarks, which we have to follow or risk losing our certification or regulatory permissions, even though said measures in most cases reduce security.

I’m looking at you

- Sudoless single-user machine users

- VPNs

- AVs on servers

- Disabled ports

- Disabled ota protocols

- etc


Short aside about session lengths:

If you are an app like Chase or Discover, and you intend to kill my app's biometric-auth session after, say, 90 days, please tell me that when I first log in manually, and at least 7 days before said session will end.

I keep my passwords in local password safes, and I manually sync a copy to my phone.

It should not be expected by the user to have access to their passwords every day, if they have apps which say "Oh yeah, fingerprints! We trust you!".

If I were in the middle of a trip and got logged out of my mobile app without my password safe, I would be screwed, and it would not be my fault, given how opaque the session policy is on these apps.


And yet ITP destroys all cookies after 7 days, even first-party cookies, unless your site never uses any other third-party iframes or hosts servers in multiple data centers.

But they do it to prevent websites from tracking their own visitors without consent for more than 7 days… I think.

So sessions don't expire just to spite you.

But the question is: given that the cache is totally cleared after 7 days, can iOS Safari ever guarantee that a specific URL will NEVER be replaced by the server with something else? With non-expiring caching, we could! But now?


Judging by the response quantity and low quality (some are really defensive), this has really struck a nerve here.

A 15-minute logout on a bank site isn't even enough time to read a full statement PDF.


We use short sessions to protect patient information from unauthorized access in the systems that we build. Yes, you shouldn't leave your machine unprotected, but real humans do. Sometimes you may trust someone with access to your computer, but they shouldn't necessarily access certain protected systems. Especially in HIPAA-land, short sessions are a really effective way to make systems usable while still limiting how much damage can be done, even accidentally.


What’s annoying and completely unnecessary is aggressively short session times on resources that are already behind a VPN and multiple layers of security, such as an RDP session, making you constantly re-authenticate throughout the day if you happen to turn your attention away for a few minutes. Having to re-authenticate for multiple services throughout my workday really adds up.


Short session lengths are often demanded as a thing to do (or provide as an option users can opt into) to remediate “issues” found from security audits.

It’s an easy enough box to check and so it’s often checked.

That’s why you’ll see them at banks and in other regulated environments, or on apps that have been through SOC 2 or other security audits (Google makes you do an audit if you use certain APIs).


"short session timeouts, such as after 15 minutes of inactivity"

I wish my bank, credit card, brokerage websites gave me 15 minutes of inactivity. It seems most have set it at about 5 minutes. My finances are not trivial so I am often logged out when paying bills, figuring out my budget, planning a big purchase, etc. I don't think there is any way to set it to my preferred value.


It seems like a shorter session expiration time could help with analysis because you don’t have to worry about bugs that were fixed long ago. The longer the expiration time is, the farther you have to go back in history when thinking about how an old bug might have been exploited.

(Similarly for key rotations.)

This might affect how long you want to retain logs?


“The threats described above are easily fixed”: this is not correct. Non-tech users do not understand any of this, imho.


> You forgot to lock your computer when you went to lunch, and the attacker sat down at your desk and gained access to your machine.

> In this case, session expiration may prevent them from gaining access to your session, if they weren’t fast enough. However, they now have access to your email, ...

Only because GMail sessions never expire now.


Can someone associated with PayPal bring that article to some decision-maker’s attention, please? I believe their session timeout is fixed at 5 minutes of inactivity, which you easily hit when doing accounting tasks with PayPal purchases. It makes PayPal the most annoying web app I frequently need to use…


The worst offender I’ve seen for this is Onshape. I totally get them suspending the session for GPU usage reasons, but there’s no reason I need to reauthenticate 80 times a day when taking 15-minute breaks away from the computer to go look at a machine.


Very tangentially related, but people keep trying to convince me that cookies are somehow safer than JWTs. Is there any truth to this whatsoever?

Keep in mind that you can have a "sessions" with JWTs too, though it entirely defeats the point.


It helps security tremendously (against local device malware) if you use hardware 2FA, which is now widely supported and is used by the security conscious.

So does sandboxing and file system access permissions.


Yeah, mostly true. But were you figuring that the short expirations were for actual security, or for theatrical security? They can be extremely useful for the latter.


Browsers need to provide a way for sites to query whether the device has a passcode or not. A lost device is a top reason for short session cookies.


If you control the device you can spoof whether it requires a passcode or not. Unless you go down the user-hostile rabbit hole of device attestation.


If expiring sessions are not important, we can go back to using http authentication on the web and not have to build our own. Yay!

Oh, we still need 2-factor auth? Darn.


Short sessions exist to enable token revocation that does not require a central sync point (aside from time servers).


Phew, now I will bump session time to a week, since shorter ones were bothering me a lot. Thank you! :D


A week? Here I was thinking 8 hours would be a luxury.


I fought a lot at work until I got our session times extended to 2 hours (they were 20 minutes).


Why not a year? Or forever?


If the account is for accessing employer's system then sessions have to be kept short, and users have to re-login every day. Otherwise employees who have left the company would continue to access the system.

The reason Google never expires your session is because they want to track your activity and connect your activity to your account. This is not a good system to copy.


This seems like a non issue... Can't you just invalidate existing sessions/tokens immediately when an account is suspended?


If you're using your own password-based auth, then you can. But that has its own issues (such as having to remember to delete the user's account on your system). If you're using single sign-on, then you have a token from an identity provider such as Microsoft or Google, and you don't get immediate notification that the user's account has been suspended.


Surely you can revoke a session before it expires?


No you can't because you don't immediately know that the user's account has been suspended (assuming you're using an identity provider).


It does when access is on public computers, such as in a library or similar shared terminal.


Authentication should be in every request, using TLS client certificates.


It would be interesting to see a test of this idea with some sort of honeypot.


It definitely helps with "friendly fraud"


Somebody tell that to Interactive Brokers, man. At least the session should remain active for the day's trading hours.


Has anyone sent this to openai?




> Current security advice is to use quite short session timeouts, such as after 15 minutes of inactivity.

I do not remember a web application with 15 min timeout.

Some banking apps have short timeouts, even shorter than 15 minutes. But normal apps usually had 30 minutes, I think. Many simply relied on the browser session cookie; on desktop, that meant until you closed the browser.

15 mins would be inconvenient.

Also, most web apps with limited session duration had a Remember Me checkbox that extended the session cookie duration to about two weeks.

So the user-experience costs are not as big as the article presents.

> However, most mobile apps and big web applications such as Gmail or GitHub don’t adhere to this. You can be logged in seemingly forever without authenticating again. Are these insecure? Do Google and Microsoft know better than NIST and OWASP?

I can not compare to NIST and OWASP, but I do not see Google and Microsoft as ideals everyone should blindly follow.

I suspect these companies started to use endless sessions (a security degradation, imho) after they adopted two-factor authentication. The authentication process had become too annoying for users, so companies introduced endless sessions: zero-factor auth.

Similarly, frequent password rotation was recommended as a best practice for years, and then it was realized it did more harm than good, because annoyed users take shortcuts. Two-factor auth led (I suspect) to zero-factor auth.

A similar claim in the article, about short sessions, is not supported by evidence:

> However, short sessions have disadvantages, both in user experience and in security. If someone needs to log in again every 15 minutes, they are going to make authenticating as easy as possible. That means keeping the password vault open, choosing an easier password, or putting the password on the clipboard every time. Reauthentication comes with its own risks. A shorter expiration time does not automatically reduce the overall risk.

That's not how it worked for me. Sometimes I maybe used the Remember Me checkbox, but often I prefer to type the password manually; I like to be prompted for the password from time to time, to refresh it in memory. Some people use password managers.

As a user, I am OK with Hacker News having endless session. But not email.

Ideally, I would like each authentication factor to have a separate cookie/token, with its own user-controllable expiration time. So instead of the old Remember Me checkbox, there would be two Expire In duration inputs. And instead of a Sign Out button, the two tokens would be represented separately, with the expiration interval and remaining time, which I can change or terminate.


Neither does “Your password cannot match your previous 1,000 passwords”. Let’s just keep everyone’s password and previous password around in a database shall we?


You don't need to keep someone's plaintext password in the database to compare. Just re-calculate hashes against historical hashes at new password time. If the hashing process is secure keeping a few old hashes isn't really a security concern, especially if people are actually using unique passwords.

Now, normally this is tied to an automatic password rotation policy, because allowing a previous password completely undermines a rotation policy. The real thing to complain about and attack are these rotation policies, which are probably more harmful than helpful.
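A sketch of that history check (names are hypothetical; scrypt from the stdlib stands in for bcrypt/argon2): the candidate password is re-hashed under each historical salt and compared, so only salted hashes are ever stored.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # scrypt from the stdlib; bcrypt or argon2 would be the usual choices
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

# History stores (salt, hash) pairs, never plaintext.
history: list[tuple[bytes, bytes]] = []

def was_used_before(candidate: str) -> bool:
    """Re-hash the candidate under each historical salt and compare."""
    return any(hash_password(candidate, salt) == h for salt, h in history)

def set_password(new_password: str) -> bool:
    if was_used_before(new_password):
        return False  # rejected: matches a previous password
    salt = os.urandom(16)
    history.append((salt, hash_password(new_password, salt)))
    return True
```

Note that the check costs one full KDF invocation per historical entry, which is one practical reason history depth is usually capped at a handful of entries rather than 1,000.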


You should never keep plaintext passwords; bcrypt them at the very least. I'm referring to keeping a history of those hashes around to compare against leaked databases one might find on the dark web. Having a history of the last n password hashes is n+1 times as bad as storing a single password hash.


I have seen sites that complain if your password is too similar to a previous password, and that shouldn't be possible with hashes. So who knows whether the sites that look for exact matches are comparing hashes or not.


Systems like that probably do keep some form of plaintext password history, which is very wrong. They might also apply some basic common transforms to the new password (increment the numbers, did the hash match? decrement the numbers, did it match? swap the case around, did it match?), but I agree that's probably a pretty big stretch versus taking the easier (and worse) route to build that anti-feature.

I'm not saying that everyone does it right, I'm just saying having a password history enforcement does not require them to keep plaintext passwords. You don't need to "keep everyone’s password and previous password around in a database" in order to have some form of password history enforcement.
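The transform trick described above could be sketched like this (entirely hypothetical; a real system would use bcrypt or argon2 rather than PBKDF2 with a modest iteration count): cheap variants of the *new* password are hashed under the old salt and compared against the old hash, so no plaintext history is needed for a limited similarity check.

```python
import hashlib

def kdf(pw: str, salt: bytes) -> bytes:
    # stand-in KDF for the sketch; a real system would use bcrypt/argon2
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 100_000)

def transforms(new_pw: str):
    """Cheap variants of the *new* password to test against the old hash."""
    yield new_pw
    yield new_pw.swapcase()
    # increment/decrement a trailing number, e.g. "Spring24" -> "Spring23"
    head = new_pw.rstrip("0123456789")
    tail = new_pw[len(head):]
    if tail:
        n = int(tail)
        yield f"{head}{n - 1:0{len(tail)}d}"
        yield f"{head}{n + 1:0{len(tail)}d}"

def too_similar(new_pw: str, old_salt: bytes, old_hash: bytes) -> bool:
    return any(kdf(v, old_salt) == old_hash for v in transforms(new_pw))
```

This only catches the specific transforms you enumerate, which is exactly why sites that detect arbitrary "similarity" almost certainly have the old password in a recoverable form.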


Correct. Security often compromises itself for its own sake. It's stupid, lowest-common-denominator thinking. If they store a history of hashes, one could check whether a password was used previously, but not classify similarity. I get downvoted by the security folks who think this practice is a good idea. It's not; it's a horrible idea. So is enforcing 30-minute session/cookie timeouts, or constantly scanning your machine for "threats", pegging your CPU at 100%, 100% of the time. Security folks are like the mall cops of IT. Not saying that's everybody; there are some Terminators out there, or Jack Ryans, but most are Paul Blart. They want to read your emails, see your browsing history, read your chats, get offended if you criticize them, bring you into HR if you circumvent their crappy jump-box backdoor into AWS...


That usually works by storing the password hash, not the password itself.


Yeah, definitely not advocating keeping plaintext passwords, ever. But a history of previous hashes is bad too if the database is leaked.


It's easy. Make your password CorrectHorseBatteryStaple000. Increment until 999. Rotate.



