Used jwt for our web app before - changed to session cookies. To make jwt secure for web apps it feels like you are reinventing session authentication.
JWT is the authentication equivalent of blockchain. Aside from a handful of legitimate uses, people try to apply blockchain to problems that aren't real. JWT is using cryptography to solve a problem that doesn't exist, which is that storing and retrieving user sessions on the server is burdensome, even though it's one of the least expensive computations you can make on a server. In fact, unlike blockchain, I can't think of any real world scenarios where a JWT is superior to cookie auth over HTTPS. It just sounds really awesome until you realize that, no matter what, you're going to need to keep and fetch some information about the user on the server, and that it's really not helpful at all to offload user data into a client side token. As someone who used to be enthusiastic about JWT, I kind of wish it would die in a fire. Cookie auth is much easier to implement without the need for a 3rd party library and is also more secure.
> I can't think of any real world scenarios where a JWT is superior to cookie auth over HTTPS.
What if you aren't using a browser? What if you explicitly want to know that the contents of the auth token aren't modified? What if you want to work with an IdP that only provides JWTs to offer centralized authentication and user management?
> Cookie auth is much easier to implement without the need for a 3rd party library and is also more secure.
As long as you are in the browser, sure! In many cases you can use secure cookies and server side sessions and they work well. But as soon as you get into more distributed or larger systems, reaching out to multiple APIs, JWTs may make sense.
There's no difference between browser authentication via a cookie and app authentication via a token. The cookie stores the session token, the app stores the session token.
What if you explicitly want to know that the contents of the auth token aren't modified
There is no content in an auth token — it's a random number.
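To make the "random number" point concrete, here is a minimal sketch of opaque-token session auth, assuming an in-memory store (a real deployment would use a database or Redis, and the function names are illustrative):

```python
import secrets

# In-memory session store; a real deployment would persist this.
SESSIONS = {}

def create_session(user_id):
    # The token carries no information at all -- it is just ~256 bits
    # of randomness used as a key into the server-side store.
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {"user_id": user_id}
    return token

def lookup_session(token):
    # Returns the session data, or None for an unknown/expired token.
    return SESSIONS.get(token)
```

Because the token is meaningless on its own, revocation is trivial: delete the server-side entry and the token is dead.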
What if you want to work with an IdP that only provides JWTs to offer centralized authentication and user management?
>> solve a problem that doesn't exist, which is that storing and retrieving user sessions on the server is burdensome, even though it's one of the least expensive computations you can make on a server.
I've never heard anyone say that's the problem; rather, the issue is the network latency + dependency of hitting a cache/database for every future request. It's not a problem for everyone's apps, but pretending it's never a problem for anyone is shortsighted. Session cookies are just fine for many/most apps, and JWTs add new problems, so I don't recommend them. YMMV
Cookies also have domain restrictions. The client won't send a cookie from domain foo.bar.com to baz.bar.com, unless you set your cookie domain to *.bar.com, which may be bad for other reasons. Even if you get your cookie domains right, there has to be a single source of session truth across the system. This gets to be complex in distributed systems.
On the other hand, a client can usefully send a JWT to any other service that has the secret key to validate it, and once validated, the information the service needs can be in the JWT itself, so there's no session lookup needed.
It's the difference between giving a friend your parking stub so they can go pick up your keys from the attendant and bring your car back vs giving the friend a copy of your key and having him get the car from wherever it might be.
> there has to be a single source of session truth across the system. This gets to be complex in distributed systems.
This sounds like a database. Or if that's too slow for you, a Redis cache.
I try not to make blanket statements about software infrastructure, because we all work on a variety of systems. However, I'm having a hard time thinking of scenarios where session cookies are too hard or too slow.
JWT is just a format for passing signed authentication/authorization data between two systems, possibly through untrusted channels (e.g. web browsers, mobile apps).
If the receiver trusts the signer, they can use the data. Sometimes the two systems cannot share a database, or a cookie.
Like everything else, JWT is sometimes used inappropriately.
And of course it's not the only way to pass signed data between two systems. Sometimes it makes sense to create your own equivalent that supports only the pieces you need.
One popular criticism (and historical security risk) of JWT is that the spec allows signing algorithm flexibility. This is easily mitigated, but of course any tool that can be used improperly, will be, by someone.
But honestly, JWT is straightforward, simple, and has good library support across all popular languages. It's a good choice for an API that you expose to groups outside your organization.
The problem lay in (1) implementations blindly accepting the value in the "alg" field as sent by the client, and (2) one of the allowed "algorithms" being "none". An attacker could create a JWT with the desired data and "alg": "none", and the flawed implementations would then "validate" the data by applying "none" and accepting the empty signature "".
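The forged token described above is trivially easy to construct; a sketch using only the standard library (claim values are made up for illustration):

```python
import base64
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# A forged token exploiting the "alg": "none" flaw: the signature
# segment is simply empty, and a flawed verifier would accept it.
header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "admin", "role": "superuser"}).encode())
forged = f"{header}.{payload}."   # note the trailing dot: empty signature
```

The mitigation is equally simple: a verifier must pin the algorithm(s) it accepts and ignore what the token claims about itself.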
Oh, and the spec required conforming implementations to accept "none".
Well, "alg: none" was mandatory to implement to meet the specification, but was not mandatory to accept in use.
And most implementations did not include "none" in the default accepted algorithm list.
But yeah, the optics of that debacle were poor. And I can sympathize with those who instinctively avoid JWT due to concern that there are other design/spec flaws lurking.
(JWT is so simple though -- it's what you'd design if you designed a thing for signed, base64'ed messages and then added a couple of extra fields. Some people will respond: "So then do that instead!". :)
IMO, as long as you think of JWT as a simple and standardized signed message format, with broad cross-platform support, it's perfectly suited for its purpose.
Hm, that's a surprising take. I think JWT is super simple.
One header chunk, one data chunk, one signature chunk. Base64-encoded and concatenated. Done.
You don't need a JWT library, although I guess I'm taking JSON parsing ability as a given. This seems reasonable -- anywhere JWT would make sense, JSON is likely established.
Some of the other associated standards (JOSE, JWKS, etc) can get complicated, but JWT is about as simple as it gets.
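The "three base64 chunks" structure described above really is all there is to it; here is a minimal HS256 verifier using only the standard library (a sketch, not production code -- it skips claims like `exp` and assumes the key is shared out of band):

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(s: str) -> bytes:
    # Re-add the padding that JWT strips.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_hs256(token: str, key: bytes) -> dict:
    # Split into the three chunks: header.payload.signature
    header_b64, payload_b64, sig_b64 = token.split(".")
    # Pin the algorithm: never trust the token's own "alg" field.
    header = json.loads(b64url_decode(header_b64))
    if header.get("alg") != "HS256":
        raise ValueError("unexpected algorithm")
    expected = hmac.new(key, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))
```

A real deployment would also check `exp`, `iss`, and `aud`, which is where a library starts earning its keep.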
> I can't think of any real world scenarios where a JWT is superior to cookie auth over HTTPS.
I used it for cross-server auth sign-in to B.com from A.com
> It just sounds really awesome until you realize that, no matter what, you're going to need to keep and fetch some information about the user on the server, and that it's really not helpful at all to offload user data into a client side token
I guess you always must unless you want to skip permission validation and stuff.
If you have for instance a service which displays hundreds of images on a page, using cookie authentication which hits the database every time is quite slow.
Using some kind of cryptographic token that can be validated without the database hit is much faster.
Another interesting method, overkill for most applications but absolutely required for end-to-end encrypted apps where no password must be sent to the server (eg: password managers): the Secure Remote Password (SRP) protocol[1].
It's a form of zero-knowledge proof-based verification that the password provided during account creation matches the one provided during an authentication challenge, all without transmitting the password at all. As a bonus, it also acts as a key exchange between the client and server, which can be used for securing transmissions over untrusted channels (at the cost of having stateful connections).
SRP was good for its time, but things have moved forward in the password-authenticated key exchange (PAKE) world. See for example OPAQUE:
> Currently, the most widely deployed (PKI-free) aPAKE is SRP [RFC2945], which is vulnerable to pre-computation attacks, lacks a proof of security, and is less efficient relative to OPAQUE. Moreover, SRP requires a ring as it mixes addition and multiplication operations, and thus does not work over plain elliptic curves. OPAQUE is therefore a suitable replacement for applications that use SRP.
> This draft complies with the requirements for PAKE protocols set forth in [RFC8125].
I have been checking out OPAQUE for some time. But I couldn't find any reliable javascript implementation that I can use in my webapp. Do you know of any such implementations?
There are two scenarios in your "webapp". In both scenarios your connection to a client should be secured with TLS (HTTPS).
1. Humans, should be authenticated with WebAuthn. This means now they can't get phished (a major threat today), their credentials can't be stolen (even from you, you only have a worthless public key and an arbitrary identifier), their privacy is protected as best possible, and most likely their platform is also looking out for them (e.g. on an iPhone the fingerprint sensor so their kid sister can't even authenticate to your webapp from their phone)
2. Machines should be authenticated with either public key encryption (mutually authenticated TLS) if you're up to it or random revokable tokens issued by your service for that purpose to a human end user.
PAKEs are most valuable when you do not have a secure connection, but a webapp should have a secure connection over HTTPS already.
It's not absolutely required for end-to-end encrypted apps (that encrypt data with password or use it for authentication). The two benefits of SRP-like (aPAKE) protocols are:
- a password hash (verifier in SRP) is not transmitted during authentication. (But is transmitted during registration and stored on the server). This isn't really required if the secure channel is already established — e.g. TLS — you can as well just send the password hash via TLS, which the server will hash again and check against the stored hash. For registration you already need some kind of non-password based secure protocol to transfer the verifier anyway.
- mutual authentication (that is, client also learns if the server knows the password/verifier). For this case, an app would already come with some kind of mechanism to ensure that the server it talks to is authentic (e.g. TLS with pinned certificates or some other public key protocol with hard-coded keys). So the benefit of verifying that this authentic server is the one that stored your verifier is also limited.
To summarize, there's not much use for SRP in a typical e2e app, since it already needs a secure connection between a client and the server, and SRP verifier is a client-side password hash.
SRP is useful if you need to establish a secure connection with a password and nothing else, but it requires first to store the verifier on the server somehow.
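The "just send a client-side password hash via TLS, which the server hashes again" alternative described above can be sketched like this (the KDF choice, salts, and iteration counts are illustrative assumptions, not a recommendation):

```python
import hashlib
import hmac
import os

def client_side_hash(password: str, salt: bytes) -> bytes:
    # The client derives a hash so the raw password never leaves the
    # device; this plays roughly the role of the SRP verifier.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def server_store(client_hash: bytes):
    # The server hashes again with its own salt, so a stolen database
    # doesn't yield a value that can be replayed as "the password hash".
    server_salt = os.urandom(16)
    stored = hashlib.sha256(server_salt + client_hash).digest()
    return server_salt, stored

def server_check(client_hash: bytes, server_salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.sha256(server_salt + client_hash).digest()
    return hmac.compare_digest(candidate, stored)
```

As the comment notes, this only works because TLS already provides the secure channel; without it, the transmitted client-side hash is itself replayable, which is the gap an aPAKE closes.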
Meh. I'm getting a sense that the author's grasp of authentication and authorization protocols is incomplete. I don't mean in the sense that the author left out some popular protocols, which is true, but in the sense that there are details wrong. For example, OAuth and OpenID are mentioned only in the context of social login, and the differences between OAuth, OAuth2, OpenID and OIDC aren't covered (and OIDC is almost entirely unrelated to the original OpenID). Also, the author has a section at the end, "GitHub social auth", as if it were somehow different from the regular OAuth used all over the internet.
I didn't get past the Basic Auth part where the author said the cons were both that you have to send credentials with every request and that there was no way to log out.
For one, both of those cannot simultaneously be true (as omitting credentials becomes equivalent to logging out).
In addition, all of the methods described require credentials with every request. Some just are stored in the cookie jar instead of the browser's auth handler. But whether it ends up in an auth header or a cookie header, it's still part of the HTTP request.
I agree that this is what the author meant. But is it a true statement? With cookie-based auth, the user can't (except for digging deep into the settings / dev tools) make the browser stop sending a particular cookie either, but the server (or script) can be programmed to ask the browser to remove a specific cookie. Is the same not true for basic auth, i.e., is there no response header the server can send that results in the browser ceasing to include the basic auth creds thereafter?
"With cookie-based auth, the user can't (except for digging into the settings / dev tools) make the browser stop sending a particular cookie either."
I am a user and I can easily stop the browser from sending a particular cookie, without digging into settings or dev tools. This is assuming the operating system lets me run whatever software I choose.
In your case, I'm quite certain you could stop sending basic auth credentials just as easily as you could stop sending a particular cookie.
For the case of a less sophisticated setup, which is to say mainstream browsers with no extensions and users who want to achieve logout by using some link/button rendered inside the viewport, is there a way to convey the logout intent [to the server such that no more than one round trip later] or [to the browser without closing it such that] the browser is no longer including basic auth credentials?
The setup I use is not sophisticated by HN standards, but it does not in any way rely on the application, e.g., the web browser. I do not use the browser itself to try to control the browser's activity. I do not use browser settings, dev tools, extensions, etc., which seem to be the more popular choice.
As I mentioned in the original comment, some websites, like banks for example, will log the user out after a period of inactivity. Whereas tech companies, always trying to collect more data, may encourage users to "stay logged in", ideally forever.
Having session cookies expire could be an option that users control. The user could choose to have the site log her out after x minutes of inactivity, or some such. The site could agree to "expire" the session cookie on the server side such that it could not be used again. How would the user signal her chosen option to the site? I leave that to the reader. Perhaps an HTTP header sent with the initial log in request: Session-Cookie-Max-Age: [seconds]
With cookies, the site can tell the browser to drop the cookie, change the cookie, or just consider the cookie as unauthenticated going forward. With basic auth the site has no control, and the user has no control.
How basic auth works is a design choice that still baffles me.
Well, not explicitly. But you can respond with a set-cookie header for the same session id, with an expiry in the past. Browsers will purge expired cookies.
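A minimal sketch of that logout response, building the Set-Cookie header by hand (cookie name and attributes are illustrative; the server should also invalidate the session server-side, since a captured cookie value would otherwise still work):

```python
from datetime import datetime, timezone

def logout_headers(cookie_name: str = "session") -> dict:
    # Re-set the cookie with an expiry in the past; compliant browsers
    # purge it immediately. The value is irrelevant once expired.
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    stamp = epoch.strftime("%a, %d %b %Y %H:%M:%S GMT")
    return {
        "Set-Cookie": (
            f"{cookie_name}=deleted; Expires={stamp}; "
            "Path=/; HttpOnly; Secure"
        )
    }
```

Most web frameworks wrap this in a `delete_cookie`-style helper, but under the hood it is exactly this header.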
It makes no sense whatsoever to "compare" Web Authentication methods for humans in 2020 without even mentioning WebAuthn since that's literally why it's called WebAuthn (Web Authentication) and that's exactly what it's for.
In one sense this is bad news for this comparison, but it's also bad news for the general state of security on the web, because this item is 8 hours old and I'm writing this top level item now, which means for eight hours people who think of themselves as "web developers" didn't even ask "Why not WebAuthn?"
I think that's a bit uncharitable. WebAuthn isn't terribly useful right now simply because most users lack hardware tokens. And from a backend services perspective, which the article seems to take, there's no good place to install hardware tokens for cloud services. If you own the physical server, only then does WebAuthn provide advantages.
I hope that someday, users can just plug their phone into their computer and use it as a security token.
Just tried webauthn.io. Doesn't even work :/ I clicked register and it said "Use your security key now or cancel". What does that even mean? What's a security key? Is it a setting in my browser? I have lastpass installed, can it use that? Is it supposed to integrate with the Mac keychain somehow? Gave me no options but cancel.
Pretty underwhelming if that's the future of web auth..
Or, more practically, if you have a nice phone, try visiting it on that. If it's a relatively modern iPhone or high end Android with a touch sensor running the current OS version you can use that.
Since you mentioned Mac keychain, if your Mac has TouchID that will work on the current OS (in Safari at least) too. Some Windows PCs with fingerprint sensors likewise.
Sure. But biometric authentication for WebAuthn or the related mechanisms built into iOS and Android themselves only takes place on your device. So at the extreme to "stop" you can just replace that device, I understand most people do that every few years anyway.
WebAuthn doesn't end up with a third party (say, Facebook) having biometric data, what they've got is just a public key and an identifier. Your device is signing to say it checked you are still you, whoever that is. It does not promise how it did that and there's no reason a web site would care.
Your biometric data (if that's how you authenticate) is only needed by your device, to verify this is still you when it makes that claim. So any changes (e.g. you decide to use your other hand) are a local device problem, no need to tell any third parties anything interesting happened.
"To mitigate replay attacks (re-use of a sniffed cookie), the value of the cookie used for authentication SHOULD NOT contain the users credentials but rather a key associated with the authentication session, and this key SHOULD be renewed (and expired) frequently."
Session cookies are often defined as cookies that expire when the browser is closed. The truth is that session cookies do not necessarily expire when the browser is closed. Indeed they are lost to the browser, but the user can save them to a text file. If they are truly expired then they should no longer work. However, in some cases, the cookie in the text file can be reused to avoid login, long after the browser is closed and across subsequent reboots. For example, one such case is a very popular webmail provider. Since the provider now forces users to run Javascript in order to login, this technique can be used to check, read and send mail from the command line using clients that do not include a Javascript interpreter. There are many other examples where session cookies can be used after the browser is closed to avoid having to keep logging in. Unless the website is one that logs the user out after a period of inactivity, there is a good chance "session cookies" can be used long-term.
In my mind, of course the server doesn't know if you took a cookie you received from the previous instance of your browser and placed it into the next instance of your browser, because the act of closing your browser doesn't call every server saying "don't consider this cookie valid ever again," the act of closing your browser just deletes cookies that have a lifetime of 0 (but does not delete cookies with a nonzero lifetime, IIRC).
In passwordless authentication, the device creates a public and private key when registered. The private key can only be unlocked using fingerprint or PIN. If an attacker knows the PIN he also needs the device. If he has the device he also needs the PIN.
Mostly. Microsoft seems to really hate just saying "Do this standard thing" and prefer to say "Do this special Microsoft thing †" with an asterisk explaining somewhere that it's basically the standard thing but with a Microsoft logo and also your Microsoft Certified Partner will charge extra.
For example Microsoft used to tell its customers they need a "Unified Communications Certificate" for SSL/TLS. And some companies would advertise specifically that phrase. What's special about UCC? Nothing, it's just an ordinary PKIX compliant certificate using SANs, which is what all the providers are always required to do anyway. But I guess in principle Microsoft felt this way if you bought something else somehow, that's not a "Unified Communications Certificate" so that's why it didn't work.
In the case of WebAuthn Microsoft really wanted to do RSA PKCS#1 v1.5 even though that was obviously at best a weird choice and arguably a bad choice. So the standard says you can do RSA PKCS#1 v1.5 and as I understand it a pure Windows system does, but nobody else wanted that: it's slower, it's clumsier, and there is no formal security proof that it's even safe. Everybody else does an elliptic curve signature, I think either ECDSA or EdDSA.
I've built a toy WebAuthn implementation and I literally have no clue whether it works with Microsoft's clients because I was not able to test that yet because they are different from everything else. It fails closed, so if it doesn't work I guess I locked out all Windows users. But all the widely available popular solutions should work.
The "passwordless" described by Microsoft is 2-factor (but is just as convenient as 1-factor). Login link sent via email or text message is only 1-factor.
Good overview, but I'm not sure I agree with the assertion that OAuth/OpenID is unconditionally more secure. It still depends on both the provider and the intermediate site doing things properly, like generating actually random values, not reusing randomness, not leaking tokens, and all the stuff you have to worry about normally.
With the web developer hat on I most prefer working on apps with cookie based authentication since it makes it easy to use the browser to debug/make additional requests. If the cookie already exists I can go to any API url in the browser and it just works. Putting the secret in localstorage or only in app memory makes that not possible.
On the API side I prefer to support both cookie and Authorization header so you get the benefits in browser but don't overcomplicate the CLI side of things by requiring cookie state.
>Tokens cannot be deleted. They can only expire. This means that if the token gets leaked, an attacker can misuse it until expiry. Thus, it's important to set token expiry to something very small, like 15 minutes.
Sure, or your token issuer could implement https://tools.ietf.org/html/rfc7009 (if the JWTs are generated using OAuth) or you could build a revocation system.
But at that point you've negated one of the two benefits of such tokens. (The other one being you can verify that the contents haven't been modified, since they are signed.)
It calls some of these stateless, but don't they all have to run a secondary look-up?
For example, in Basic Authentication, you still have to check the username and password against a database, whether that be a file, a relational database, LDAP, etc. For JWT, to verify the signature you must look up the issuer's pre-shared or public key.
Is it even possible for a request to be stateless if it requires authentication?
Basic Auth is stateless on the client side but not on the server side.
Token auth is stateless on server side; it does not need to store any more public/private key pairs as the number of authenticating users increases. It can just use one. So authenticating users does not affect state
Hey auth gurus! I have a newb question for a scenario that this article doesn't appear to cover, so I hope folks don't mind if I ask it here.
Is there any way to authorize a front-end app to use private back-end services without requiring a login? (I have people abusing APIs which were intended to be private, and which may otherwise force me to require user accounts.)
X.509 certificates issued by your private PKI would do the trick.
You'd have to implement a registration / enrollment process during which you'd handle the setup but that'd be a one-time thing (plus a "renewal" process every few years or so).
Although it isn't necessarily the most "user-friendly", pretty much every HTTP(S) client and server in existence supports using certificates to authenticate clients.
As a security nerd, this is what I think I'd prefer, however...
--
An alternative that's probably more popular and "user-friendly" -- and more likely to be recommended (especially here on HN) -- would be to allow users to generate and manage API keys tied to their accounts.
You could then either 1) require everyone to authenticate to the back-end services using their API keys (even "free" users) or 2) make authentication optional but implement strict rate-limiting and/or quotas for unauthenticated requests.
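Option 2 above (optional auth with strict limits for anonymous callers) can be sketched roughly like this, assuming an in-memory key store and a simple sliding-window rate limiter (all names, windows, and limits are illustrative):

```python
import secrets
import time

API_KEYS = {}          # key -> account id; a real system persists these
REQUEST_LOG = {}       # key or client IP -> recent request timestamps

def issue_key(account_id: str) -> str:
    key = secrets.token_urlsafe(32)
    API_KEYS[key] = account_id
    return key

def allow_request(key_or_ip: str,
                  authenticated_limit: int = 100,
                  anon_limit: int = 5) -> bool:
    # Authenticated callers get a generous per-minute quota;
    # anonymous callers (identified only by IP) get a strict one.
    now = time.time()
    window = [t for t in REQUEST_LOG.get(key_or_ip, []) if now - t < 60]
    limit = authenticated_limit if key_or_ip in API_KEYS else anon_limit
    if len(window) >= limit:
        return False
    window.append(now)
    REQUEST_LOG[key_or_ip] = window
    return True
```

In production you'd back this with Redis (or your gateway's built-in rate limiting) rather than process memory, but the shape of the check is the same.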
--
EDIT: There are two other similar / closely related methods that I forgot to mention which are quite easy to deal with (both client-side and server-side) and supported practically everywhere (as, if memory serves, they've been around since HTTP 1.0 and 1.1, respectively!): HTTP Basic Authentication and HTTP Digest Authentication. The latter is basically the former with MD5 hashing added, although neither are particularly "secure" nowadays compared to the alternatives. This is much less of a concern if all requests and responses are being carried over a TLS-encrypted session, however.
> Although it isn't necessarily the most "user-friendly", pretty much every HTTP(S) client and server in existence supports using certificates to authenticate clients.
It's also not limited to HTTPS. Other protocols like SMTP and IMAP can use these as well.
One good way to get 2FA is to not only verify the client TLS certificate server side, but also require the user to send their username and password when logging in. Server side, the verified certificate could be checked against the credentials provided to see if they match.
What about having your front end app authenticate with an auth provider to get a limited life token which in turn is sent with you api request? this is typically how I'd adopt a JWT scenario
Coincidentally, just yesterday I was looking for the usual simple in-server form&session Web site authn functionality for a Rust Web framework (and, ideally, also client certs).
I get why OAuth2/OpenID is popular, in a tech professional environment comfortable with leaking information about users, and also desensitized to the risks of a single Web site having numerous third-party service&CDN dynamic dependencies for a page load... but I still expected the basic default authn mechanism to be form&session, and then some of the other mechanisms to build upon some of the same authn (and perhaps autho) session&events&UX foundation.
Session based auth is what we used to do in the olden days. It was horrible. Sure, it was easy to log in and keep a session cookie, but man... restarting servers = logged out, unless you persisted sessions. Session replication. It adds all sorts of complexities after the initial implementation.
It seems we're at a (local?) maximum with JWT over cookies + refresh tokens + blacklisting.
I'm not sold on refresh tokens being a strict benefit because in the end you still have to maintain state to make sure they can't be used indefinitely.
In the end though, opaque tokens are still the easiest and are way simpler to wrap your head around. Allow calls to redis or the auth server itself early on, and if that extra call really becomes a problem, start doing a push model (where the auth server pushes out updates to others) with caches near every service.
Also, TLS mutual authentication (what you have with client certificates) is impossible with MITM unless both parties agree to the MITM (in which case WTF?) which can help free you from obnoxious management imposed middleboxes at an organisation where it's futile to try to get anybody senior enough to just understand why they're a bad idea.
Suppose you work for HugeCorp. You built a service available on say https://service.huge.example/ and it has a bunch of users who are thus HugeCorp customers. Maybe they are prisons, or fast food restaurants, or whatever, it doesn't matter, except these are clearly not humans. (Client certificate management UX is awful for humans)
Ordinarily sooner or later HugeCorp IT will decide you need a fancy middlebox - from say Cisco, or Fortinet, or dozens of other companies - to tick a box on some executive's list. Ordinarily they just impose the middlebox and, since it can't do its job otherwise, they insist your private keys and certificates get copied to the middlebox. Now it's impersonating your https://service.huge.example/ site and every bug in that middlebox is now a bug in your service. Does it offer any benefits? Probably not really unless you did a very poor job of building the service, but it did tick a box on a list, and the manufacturer got paid. Good luck, have fun.
But with mutual authentication that can't work. They could reach out to every single user of the service and agree that all these users will actually now be separately authenticating to the middlebox. If any of them don't want to, you can't offer the service to them any more. So this won't end up happening, although feel free to propose it in meetings if you want the executives to explode.
Instead they'll have to exempt your service from the stupid middlebox, and you are freed from wasting your time chasing bugs that are in somebody else's product. Remember to send pity donuts to the teams trying to "fix" such problems in other services that weren't as lucky.
Finally TLS 1.3 makes client certificates work a little better, because it allows a server to give more sophisticated guidance on what sort of certificates it actually wants. In prior versions the only guidance the server is permitted to give is, "I trust these CAs, show me a client certificate they signed".
Still never use this for humans though, the human facing UX is not at all good. Good for machines talking to machines.
> It's stateful. The server keeps track of each session on the server-side. The session store, used for storing user session information, needs to be shared across multiple services to enable authentication. Because of this, it doesn't work well for RESTful services, since REST is a stateless protocol.
What stops you from keeping the JWT token in there? In fact, I doubt that it's some random session ID and not some encrypted payload that gets decrypted instead of looking it up in the database.
Nothing, except that then you're inheriting all the complexity of JWT for not even a pretense of the JWT's supposed benefit of statelessness. You should do the simplest thing that works for your application; usually, the simplest and safest thing is session-based authentication with a random session ID.
It seems like for simple 2-party exchanges, once you are using TLS (and assuming you trust it), I'm not sure what the advantage is of doing anything more than basic authentication. Perhaps if you want to throttle authentications to prevent brute forcing it becomes a problem.
For sure, if you have more complex auth needs (more parties, granular access, etc etc) then you can start to justify more complex things ... but I'm curious what other weaknesses are there in that scheme?
Well if you don't want employers finding their users' private passwords in network logs it's worth doing more than basic auth. Inspecting TLS traffic is commonplace nowadays.
I find it amusing that this article starts off explaining the difference between authN and authZ and then immediately proceeds to a describe an authN scheme where the authN transaction utilizes a request header called "Authorization" (used in reply to a response header that doesn't have this quirk, oddly enough). Would be nice to see a footnote apologizing for what I'd consider a linguistic blunder on par with "Referer"...
Basic Authentication is a pain with a web app sitting behind Apache as it strips all "sensitive" headers before passing along the request to the web app.
That's why you should use nginx and cooperate with the app. Maybe 'http_auth_request_module' for subrequesting to the right service, but I'm guessing passing the correct headers along would do as well.
> Are people still using Apache web server in 2020?
Yes.
In fact, about one-fourth of the "top million busiest sites", 24.6%, are "still using Apache in [December] 2020" [0] -- more than any other web server.
More "active sites" also use Apache (25.6%) than any other web server.
When it comes to "all sites", however, nginx (33.48%) is the most widely used, followed by Apache (27.07%) and Microsoft (7.94%).
For Digest the article says "Credentials must be sent with every request"... but not really. Once you're logged in, the server can remember that you are and not ask for it each time. The client can send it only when requested.
Perhaps I am misunderstanding you, but you actually stay 'logged in' by sending an authentication token with each request, usually via a cookie that most web frameworks handle for you.
So you actually are sending some sort of credential every request.
Sorry I wasn't clear. With http/2 you establish a connection and then send multiple requests on it. So I wrote an implementation that just asks for the Digest auth the first request on the connection. The server doesn't issue another challenge until the next connection.
I haven't seen such an implementation in the wild. "Remembering you are you" means switching to another authentication method, which is a needless complication.
What’s everyone’s preferred auth library for NodeJS based server use? It’ll be interacting with both a React based front-end from desktop as well as a SwiftUI front end from mobile.