Ask HN: What do you use for authentication and authorization?
457 points by nates on Dec 27, 2018 | hide | past | favorite | 236 comments
I am currently starting work on a new app/website. Currently planning to have 1 BE API set to start, probably graphql (which will be user data/information and need to check with the auth server about being protected). I will also have many client apps (web, mobile, potential partners) that will need to make queries to that BE. Do you usually roll your own authentication or use something like auth0/fusionauth/gluu/etc? This product is going to need to be secure as it will be in the healthcare space (so think oidc).

Hard to say without more concrete details, but if I had to reply in broad strokes:

- For web, user/pass login exchanged for plain session cookies. Should be marked httpOnly/Secure, and bonus points for SameSite and __Host prefix [1]

- For web, deploy a preloaded Strict-Transport-Security header [2]

- For api clients, use a bearer token. Enforce TLS (either don't listen on port 80, or if someone makes a request over port 80 revoke that token).

- If you go with OpenID/OAuth for client sign-ins then require https callbacks and provide scoped permissions.

- Don't use JWT [3]. Don't use CORS [4].
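The cookie flags in the first bullet translate directly into a Set-Cookie value. A minimal, framework-free sketch (the helper name is made up):

```python
def session_cookie(token: str) -> str:
    """Build a hardened Set-Cookie value per the flags above.

    The __Host- prefix requires Secure, Path=/, and no Domain attribute,
    which is why those attributes appear together here.
    """
    return f"__Host-session={token}; HttpOnly; Secure; SameSite=Lax; Path=/"
```

This would be sent as `Set-Cookie: <value>` on login; SameSite=Lax is a reasonable default, with Strict where the UX allows it.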

Again these are broad strokes - if you gave more information you'd get a better response.

[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Se...

[2]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/St...

[3]: https://en.wikipedia.org/wiki/JSON_Web_Token

[4]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS

> if someone makes a request over port 80 revoke that token

I really like this trick! Not only do you now have a log of "shady stuff" happening, but you've gotten rid of the now compromised tokens instantly!

Can you elaborate on what this means? I'm not too familiar with authentication flows. Why would someone make a request over port 80? Why is that bad?

To kind of combine the other comments and add some more depth:

HTTPS runs over port 443, and port 80 is normally for HTTP only.

Most people set up their web servers to serve HTTPS over 443 and redirect port 80 to 443, so anyone who sends something to `http://example.com` automatically gets forwarded to `https://example.com` (and then the browser can be told to ONLY use the HTTPS version from that point forward).

This is a generally "acceptable" tradeoff between usability and security.

But when you control the API and the client, you can know that you will never send credentials over HTTP, only HTTPS. So you should have nothing coming over port 80.

Then, with a simple script, you could set it up so that it will accept everything on port 80, and if it includes a token of some kind from your app, it logs the request, then marks the token in your database as "expired" so it can no longer be used anywhere (even on port 443).
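A sketch of such a port-80 trap, assuming a hypothetical in-memory token store (a real deployment would flag the token in your database):

```python
import logging

# Hypothetical in-memory token store standing in for your real database.
TOKENS = {"tok-123": {"revoked": False}}

log = logging.getLogger("port80-trap")

def handle_plaintext_request(headers: dict) -> int:
    """Anything arriving on port 80 is suspect: log it, and if it carries
    one of our bearer tokens, revoke that token immediately."""
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        token = auth[len("Bearer "):]
        if token in TOKENS:
            TOKENS[token]["revoked"] = True
            log.warning("token %s seen over HTTP; revoked", token)
    return 403  # never serve anything useful on port 80
```

The 403 response is deliberate: the endpoint exists only to burn leaked credentials and record the attempt.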

There's a kind of attack sometimes called "SSLStrip" which can block all requests for someone to an HTTPS version of the site, and in many cases can trick the client into trying the HTTP version if the HTTPS fails. This kind of thing would not only stop that attack, but it would also log it and ensure that any tokens sent that way would be instantly expired, so that the attacker (who saw the tokens when the client tried to send them) can't use them.

There are other reasons too. It will notify you of any developer-mistakes that are sending credentials over HTTP (it's a one letter difference, and despite the best efforts sometimes these kinds of things slip into a codebase as typos or a dev just not thinking), and it can help you tell "scanning" traffic from "normal" traffic (since nothing in your app will ever talk to port 80, so everything on port 80 is "bad" and can give you some clues into what attackers are trying on your network).

And the best part is that it's easy! I could probably throw something together for our API servers that does this in a day or so start-to-finish. That's a REALLY good ratio between time spent and security benefit that you normally don't get!

I'm guessing a downgrade attack, where a MITM convinces the client to use plain HTTP so they can read the plaintext credential.

Requests sent to port 80 will (usually) be unencrypted HTTP traffic, hence revealing the secret token to anybody listening in between the client and the server. Someone may accidentally send an HTTP request either by typo or lack of knowledge.

Isn't JWT also a type of bearer token? Could you please provide some more detailed arguments about why JWT shouldn't be used other than linking its wikipedia article?

JWT is fine if "revoke" isn't in your vocabulary for the service. If you do need to revoke tokens, JWT becomes a racey contraption that requires synchronizing and looking up state on every request, the avoidance of which was the main reason to use JWT in the first place.

If you use a realtime transport like WebSockets, you could keep automatically re-issuing a fresh JWT with a very short (e.g. one minute) expiry every 50 seconds; just push them at an interval to authenticated clients. That way your banning mechanism would only have a one-minute delay. No need to revoke tokens; just let them expire.

In such a system, a user would be logged out within one minute of closing the connection... Probably good enough for online banking. In a way, this is safer than standard sessionId-based auth because once you've issued the token, you don't need to worry about scenarios where the user has gone offline suddenly.

There are very few systems that need banning with down-to-the-millisecond accuracy.
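The 50-second reissue scheme only needs a signed claim with an expiry field. A rough stand-in using plain HMAC instead of a full JWT library (the key and helper names are made up):

```python
import base64, hashlib, hmac, json

SECRET = b"demo-secret"  # hypothetical signing key

def mint(user: str, now: float, ttl: float = 60.0) -> str:
    """Sign a minimal {user, exp} claim; a stand-in for a real JWT."""
    body = base64.urlsafe_b64encode(
        json.dumps({"user": user, "exp": now + ttl}).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def valid(token: str, now: float) -> bool:
    """CPU-only check: no DB lookup, and the token expires on its own."""
    body, sig = token.rsplit(".", 1)
    good = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return False
    return now < json.loads(base64.urlsafe_b64decode(body))["exp"]
```

The server would push `mint(user, time.time())` to each authenticated socket every 50 seconds; a banned user simply stops receiving fresh tokens and falls off within a minute.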

What you’ve described is a workflow that will technically work. But I’m weakly confident it’s still suboptimal for most use cases.

How will you deal with the usability problem introduced by expiring all sessions for users who are offline (closed tab, spotty internet connection, etc) for at least 50 seconds? You really do need to decide how to interpret users being temporarily offline.

You could just accept this as working as intended, but you don’t need to accept it if you just use normal session IDs. Is your application really so latency sensitive that it can’t tolerate a DB lookup? What is an example workflow in which users on a web or mobile application cannot be tolerably authenticated with standard DB lookups?

You can also use a refresh token, but that brings you back to the revocation problem, only with longer-term tokens. Likewise, there is a material difference between millisecond revocation and sub-minute revocation. There are good reasons to care about sub-minute revocation.

You don't need JWT in this case. You can use a normal token with short expiry and some mechanism to keep it fresh as long as the user doesn't exit the application.

JWT is simpler to implement and more scalable than the sessionId approach so why would you use the more complex solution to get an inferior result?

With JWT, you only need to do a single database lookup when the user logs in with their password at the beginning... You don't need to do any other lookup afterwards to reissue the token; just having the old (still valid but soon-to-expire) JWT in memory is enough of a basis to issue a fresh token with an updated expiry.

It scales better because if you have multiple servers, they don't need to share a database/datastore to manage sessionIds.

I don't know what stack you're working with that makes you say re-issuing JWT every 50 seconds over WebSockets is simpler to implement than the session ID approach people have been using for 20+ years :)

Simpler != cheaper WRT resource consumption. Not having to hit a DB means not having to replicate the DB to respond quickly, and having one fewer point of failure.

If you can live with quickly expiring, quickly reissued crypto tokens like JWT, it's a boon.

But JWT definitely don't work for web auth. They can be used as CSRF tokens, though.

With WebSockets, you also have just one lookup when opening the connection. No scalability issue here.

This is not accounting for reconnect scenarios. With sessionId, if the user loses the connection and reconnects, there will need to be another DB lookup to verify that the sessionId is valid. This is not the case with JWT. The validity of the JWT can be checked without any DB lookups.

Also, this is a security consideration because a malicious user could intentionally send fake sessionIds to make us DoS our database. With JWT, verification of the signature only uses CPU on the workers which are easier to scale.

You can sign session ids to prevent DoS, and you can cache session ids to avoid database lookups, but you can't detect forged or stolen JWT tokens.
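Signing session IDs can be sketched like this: a cheap HMAC tag lets the server reject forged IDs on CPU alone, before any DB lookup (the key and helper names are hypothetical):

```python
import hmac, hashlib, secrets

KEY = b"server-side-key"  # hypothetical; keep out of source control in practice

def new_session_id() -> str:
    """Random ID plus a short HMAC tag."""
    raw = secrets.token_urlsafe(24)
    tag = hmac.new(KEY, raw.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{raw}.{tag}"

def plausibly_ours(session_id: str) -> bool:
    """Cheap CPU-only check before touching the database: a forged ID
    without a valid tag never triggers a DB lookup."""
    raw, _, tag = session_id.rpartition(".")
    expect = hmac.new(KEY, raw.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(tag, expect)
```

Only IDs that pass `plausibly_ours` go on to the real session lookup, which blunts the forged-ID DoS described below.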

You can't forge a JWT without stealing the private key of the valid JWT signer.

You can steal a JWT token the same way you can steal a session token.

Why wouldn’t you rate limit connections? That is something you should be doing anyway, and it neatly resolves the denial of service problem.

Same goes for any signed token scheme. You can still revoke JWTs if you give them an ID and keep a revoke list somewhere. Though as you said, most use these to avoid datastore lookups. It's a trade-off: either time-limit signed tokens that can't be revoked, with the benefit of no lookups, or implement revocation.

> You can still revoke JWTs if you give them an ID and keep a revoke list somewhere.

You don't need the ID. You can simply store the token's signature. In fact, some implementations store the whole JWT to avoid roundtrips to the auth service, and revoking the token is just a matter of flipping an attribute in the database.
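Keying the revocation list by signature might look like this (a toy in-memory set; a real system would persist it):

```python
# Toy revocation list keyed by a token's signature segment.
REVOKED = set()

def signature_of(jwt_token: str) -> str:
    """The signature is the final dot-separated segment of a JWT."""
    return jwt_token.rsplit(".", 1)[-1]

def revoke(jwt_token: str) -> None:
    REVOKED.add(signature_of(jwt_token))

def is_revoked(jwt_token: str) -> bool:
    """Checked on each request, but an entry only needs to live until
    the token's natural expiry, after which it can be dropped."""
    return signature_of(jwt_token) in REVOKED
```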

This kind of comment might make one wonder why not just use a sessionId to begin with but JWT in this case is still useful in a microservice arch because a token which has been revoked from one high security microservice may still be valid for lower security microservices: It gives each microservice the option of deciding what kind of security they need to provide... They may not need any revocation list; maybe the JWT on its own is sufficient; they just keep accepting the token until it expires naturally.

The token expiry determines the baseline accuracy of banning across all services.

> This kind of comment might make one wonder why not just use a sessionId to begin with

JWT and sessionIds are totally different beasts. JWT are used per request, are designed to expire and be refreshed, are specific to each individual endpoint and store authorization info in a specialized third party service.

"No revocation" is a dangerous constraint to have in an authentication session. What happens if a user's token is compromised? You have to either wait for the token to expire (if you implemented expiry) or log out every single user.

That's exactly the trade-off. I'm not going to say it's a big enough negative to dismiss using the stateless signed token scheme because it depends on the needs of the application.

But either way, if you really can't afford a database or cache-layer lookup to see if a token is still valid, then by using a bearer token validated by signature alone, you accept that a user's session can be hijacked with no possibility of revocation.

The usual way this is mitigated is with a short expiry time (I've commonly seen <=5 min) and a revocable refresh token. This still gives a hijacker a possible 5 minutes (assuming a 5-minute expiry) even after the user revokes the refresh token, but it does limit the damage while still reducing DB lookups, since you only do a lookup on token refresh. Hope that clears things up. Again, your application's needs should drive these decisions.

Indeed. However, this is just a building block and not a library solution. Combined with a revocation list you are good. And use something like OpenPolicyAgent to implement it, adding a lot of other possibilities as well.

It's common practice to add an expiry timestamp to such tokens so each token expires after a certain interval.

That's dandy, but it's a solution which is neither standardized nor native to JWT. It's also a weak, passive form of revocation instead of a robust, active form. How do you revoke a token prior to timestamp expiry?

In 2018 it is fully possible to use authentication libraries which natively support granular control for things like revocation using strong, turnkey cryptography. I would argue most people who think they should be using stateless and signed sessions for e.g. performance are heavily discounting the revocation liability and neglecting to optimize their lookups sufficiently (such as by caching).

Revoking a bearer token is trivial and in all likelihood, revoking tokens is a very infrequent event. In most cases it is such a rare event that you can usually commit your blacklist to source code.

If not, a service to validate tokens against a blacklist is again trivial and will scale to all but the top 0.1% of organizations. And a token only needs to be in the blacklist until it expires.

Yes, jwt is not ideal. But this talk that you should never ever use them and your service will be immediately hacked etc is silly internet bandwagoning.

For a huge percentage of services jwts are just fine. Anyone reading this, please do not over think this advice and just ship with jwts if that is what you have.

> Yes, jwt is not ideal. But this talk that you should never ever use them and your service will be immediately hacked etc is silly internet bandwagoning.

I never said you should never ever use JWT or that your service will be hacked if you do so. In fact, if you kindly reread what I wrote you'll see that I explicitly mentioned there are legitimate use cases for JWT. I am specifically refuting the use of JWT as an authenticated session management system.

> Anyone reading this, please do not over think this advice and just ship with jwts if that is what you have.

This is poor advice.

1) Authentication is sufficiently solved for most workflows and applications that you can use turnkey solutions for more secure and more performant authentication than JWT.

2) What exactly is the scenario you envision in which JWT is all someone has? Do you mean they're forced to use stateless session management, or that JWT is literally all they can do for authentication because nothing else is available?

> What exactly is the scenario you envision in which JWT is all someone has? Do you mean they're forced to use stateless session management, or that JWT is literally all they can do for authentication because nothing else is available?

Good luck using session cookies with Cordova on iOS, for example [1]. In cases like these JWT is perhaps your only option.

[1] https://issues.apache.org/jira/browse/CB-12074

> That's dandy, but it's a solution which is neither standardized nor native to JWT.

That statement is false.

JWT were specifically designed to store a payload JSON object whose many standardized fields include the token's expiry time, and JWT were specifically designed with a workflow which includes not only client-side token refreshing but also server-side token rejection that triggers client-side token refreshes.

In fact, JWT token refreshes and token rejections feature in any basic intro tutorial to JWT, including the design principle that tokens should be discarded and refreshed by the client as soon as possible and also the use of nonces.

No, it's not false. Tutorial "best practice" guidance does not constitute a standard. JWT does not provide native revocation. Neither refreshes nor expiry constitute revocation. Revocation is an active state change, not a dead man's switch.

Yes, that's patently false.

The exp payload field is even specified in JWT's RFC along with the token rejection workflow.

The same document also specifies the jti field, which is the JWT's nonce.

Again, expiry is not revocation. This is an uncontroversial fact - if you disagree, please advise me as to how you'd revoke a token prior to its timestamp-mandated expiration without augmenting it further.

And the jti field is not intended for what you think it is. Anti-replay is not at all the same as revocation. Those are different things entirely.

I certainly believe (and have seen) the jti field used in the manner you describe. But no, that workflow is not intended for revocation. Which makes sense given the design intentions of JWT, because anti-replay can be accomplished as a stateless process, while revocation cannot.

You replied to this:

"It's a common practice to add expiry timestamp for such tokens so each token will expire after certain interval."

With this:

"That's dandy, but it's a solution which is neither standardized nor native to JWT."

People are providing evidence that token expiration is native to JWT to refute that statement, while you are arguing in parallel that "expiry is not revocation" which is related but separate.

> Again, expiry is not revocation.

Issue and expiration timestamps are used along with nonces to enforce single use tokens. Once a token is used then the client is expected to discard and refresh the token.

Implementations are also free to keep track of issued tokens and that does not pose any problem in the real world.

> And the jti field is not intended for what you think it is. Anti-replay is not at all the same as revocation.

Why are you expecting to revoke a token in a scenario where the token is supposed to be used once?

Either the token is deemed valid and accepted or it's invalidated and rejected, which triggers clients to refresh the token and retry the request.

> Why are you expecting to revoke a token in a scenario where the token is supposed to be used once?

An attacker was able to somehow issue a bunch of tokens for himself. Now you want to invalidate them even though they're not used yet.

> Either the token is deemed valid and accepted or it's invalidated and rejected, which triggers clients to refresh the token and retry the request.

The other point here is that you are probably (not always, not in every possible case, but in most common cases) better off using just a bearer token (refresh it on every use if need be). There's no performance benefit in using stateless tokens when they can be used only once, and handling bearer tokens is much easier from a gun-to-shoot-your-feet-with perspective.

> An attacker was able to somehow issue a bunch of tokens for himself

I'm no expert in JWT and just jumping in here, but wouldn't that imply total compromise of the PKI if this ever happens?

I'm saying, if this scenario comes to pass, with basically any old authentication system, isn't it now time to roll the master keys and invalidate _every previously issued_ token/session the old-fashioned way, by disavowing the prior signing key, and then bouncing every user and requiring them to re-auth freshly and establish brand new sessions within the totally new PKI?

I assume this is always still possible even with JWT from what I've read so far, but I'm happy to be educated if either of you don't mind sharing.

> I'm not expert in JWT and just jumping in here, but wouldn't that imply total compromise of the PKI if this ever happens?

Not necessarily. Let's say I steal your password and use it against the auth endpoint to get 10 one-time tokens for your account. Re-rolling the master key is a solution, but a very radical one if I can just invalidate all your tokens don't you think? ;)

> Not necessarily. Let's say I steal your password and use it against the auth endpoint to get 10 one-time tokens for your account.

The tokens are valid, thus there is no objective reason to reject them other than there was an unrelated security failure elsewhere in the system.

Additionally, tokens are generated per request and are short-lived, with an expiration timestamp that is just enough to send a request to the server.

When the token is passed to the server, the nonce is added to the server's scratchpad memory to revoke the token and thus avoid replay attacks. If anyone for some reason wants to revoke a token, they only need to add the token's nonce to the revoked list. If the nonce is present in the list then the server rejects the token and triggers a token refresh.

I'd argue that people who think they should be using caching are heavily discounting the consistency issues they will encounter (no doubt at the least convenient time), and may well end up reintroducing the same problem they're trying to solve. If you have revocable tokens accessed via an authentication lookup cache with a 5-minute expiry then you've spent a lot of time and engineering effort to have exactly the same problem as if you had non-revocable JWTs with 5-minute expiry.

> In 2018 it is fully possible to use authentication libraries

So — getting back to the OP — which libraries?

NaCL, Fernet or Paseto.

you can also blacklist existing tokens - but that's not without its own drawbacks https://auth0.com/blog/blacklist-json-web-token-api-keys/

I thought the problem with JWTs was the whole “stateless JWTs”.

Tptacek shits on them every time it comes up. Unfortunately I can never quite comprehend what he says to do instead.

He suggests KISS: you can probably get away with plain old server-side auth, and if you really need client-side tokens, use something simple that just encrypts and signs them: https://news.ycombinator.com/item?id=13612941#13615634

> Something simple that just encrypts and signs them

Like JWT?

I feel like that argument goes around in circles.

> I feel like that argument goes around in circles.

I feel that the problem is that some users are talking about stuff they know nothing about, but still feel compelled to be very vocal and opinionated.

I can elaborate a tiny bit. It's been mostly a rocky road in library development, as well as some confusion in the JWT specification. Basically, the JWT spec is poorly designed for lay-programmer use, and some folks are implementing the spec wrongly or are configuring systems that use properly-implemented libraries in dangerous ways. For instance, you need to choose the algorithm carefully and then be careful not to accept any other specified algos, as that can enable some interesting attacks (specifying a symmetric algorithm when the token was meant for asymmetric ones can lead to valid signing using the public key, if the system allows it). Also, technically a user can specify a "none" algorithm that skips payload verification; tbh all backends SHOULD drop tokens specifying this.

JWTs as bearer tokens aren't bad in their own right, but if you aren't careful you can screw yourself and therefore many security experts avoid them for use in securing systems. Plus a lot of people mistake it for an encrypted token which it isn't. You can imagine how bad that can get.

Tbh I'm with the parent commenter. I avoid them, but if you avoid common pitfalls they should work for your system no problem.

I'm on mobile and can't be arsed to gather sources, but you can search the claims I made and you'll see several articles about these problems. There's even a defcon talk about a new proposed standard (called Paseto I think) that starts by highlighting the major issues with JOSE and JWT specifically.
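The alg-confusion and "none" pitfalls mentioned above come down to pinning the algorithm before verifying anything. A hand-rolled HS256 sketch for illustration only (use a vetted library in production):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(payload: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(key, f"{header}.{body}".encode(),
                          hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_hs256(token: str, key: bytes) -> dict:
    """Pin the algorithm: anything whose header is not exactly HS256
    (including alg "none") is rejected before the signature check."""
    header_b64, body_b64, sig = token.split(".")
    if json.loads(b64url_decode(header_b64)).get("alg") != "HS256":
        raise ValueError("unexpected alg")
    good = b64url(hmac.new(key, f"{header_b64}.{body_b64}".encode(),
                           hashlib.sha256).digest())
    if not hmac.compare_digest(sig, good):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(body_b64))
```

The crucial detail is that the verifier never trusts the token's own header to choose the algorithm; the acceptable algorithm is fixed server-side.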

Also (separate post for separate replies), why not use CORS? This is the first I'm hearing of this. SPA websites often use things like JWT and CORS (ours included).

The author hasn’t clarified yet, but I suspect what they’re referring to is the fact that CORS does not support granular access control. If you make something public under CORS, any client can retrieve the resource if no other authorization or authentication check is in place. It’s not a system of authentication, it’s a system of authorization - specifically, for authorizing hosts to request resources which normally wouldn’t be authorized to do so under same origin policy.

As a concrete example: people occasionally misuse the Origin header, thinking that they can use it as a form of client authentication. The idea is that any client request from a non-whitelisted origin will fail. But any user can spoof their own Origin header, and the Origin header is primarily intended to protect users from making CORS requests they didn’t intend (because in most cases an attacker cannot coerce a browser to forge a header).

CORS is not a tool to turn resources private, but to protect the browser (not the server's content) from cross domain requests.

Exactly, the attacker can always not use the browser and emulate a browser request if motivated enough.

Yes, that's precisely why CORS is a poor fit for authentication :)

Sure, but I don't see why the tip in OP is "don't use CORS". To me that implies there is actually something insecure about using it.

Yeah you can use CORS securely, there are just pitfalls to look out for.

Wondering about the same thing.

Some people have performance concerns with CORS is the main reason I believe. The overhead is an extra round trip.

I thought the concern was security.

Anyway HTTP2 would hopefully address that (through header compression), and things like zero-RTT TLS and keep-alive further minimize the overhead of an additional request.

Plus doesn't CORS only make preflight requests periodically, not for every request?

I've written about and researched HTTP/2 a lot, and even have a small tool for it (https://http2.pro).

Among the many things HTTP/2 gives you, eliminating round-trip time is not one of them. Sure, you can keep a connection alive, but that's possible with HTTP/1.1 too.

Header compression is HPACK. If the header changes even the slightest bit, it's not cached. Dynamic URLs and headers can easily bust HPACK compression.

Preflights are cached, but because CORS is per-URL caching can be of limited value. If your API uses `/info` and `/edit`, a preflight request has to be made for both (assuming a preflight is necessary). If your application has dynamic URLs (e.g. `/widget/1`, `/widget/2`, etc.) the problem is exacerbated even further.
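For illustration, a hypothetical preflight handler that whitelists origins and sets Access-Control-Max-Age so the browser caches the (per-URL) preflight result:

```python
def preflight_response(origin: str, allowed: set) -> dict:
    """Hypothetical CORS preflight handler: echo the origin only if it is
    whitelisted, and let the browser cache the per-URL result."""
    if origin not in allowed:
        return {}  # no CORS headers -> the browser blocks the request
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "GET, POST",
        "Access-Control-Max-Age": "86400",  # cache the preflight for a day
    }
```

Note the cache is per-URL, which is exactly why dynamic URLs like `/widget/1`, `/widget/2` defeat it.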

Isn't the argument against JWT mainly one against using it with weak algorithms, and not something inherent to JWT itself?

No, there are several common arguments against JWT for session tokens. The major one intrinsic to JWT is that it has no system of revocation. Thus instead of using a turnkey solution you need to add an additional layer of state logic to your authentication code if you want to be able to revoke tokens.

It is also correct that JWT 1) supports far more cryptography than is necessary; and 2) supports weak cryptography. You can do better than JWT for session management security and performance merely by generating pseudorandom tokens, associating them to sessions and performing lookups.

More generally speaking: signed, stateless tokens are attractive for a variety of technical reasons. They have legitimate uses. But it's typically a poor security decision to choose them in lieu of revocation, for reasons which are mostly uncontroversial among those who work in security.

> No, there are several common arguments against JWT for session tokens. The major one intrinsic to JWT is that it has no system of revocation.

That's technically false. JWT features multiple systems of revocation, including the use of nonces. Token revocation also features prominently in JWT's basic workflow.

The key aspect is that there is no turnkey implementation, and thus projects need to roll their own implementation, which is frowned upon some developers.

By "intrinsic", I meant precisely that there is no JWT standard which admits native revocation. It naturally follows that no JWT implementation provides a turnkey solution for revocation, because it's not intended to.

JWT is stateless. Revocation is stateful. This is a fundamental tension in both cryptography and access control. Yes, you can retrofit your stateless authentication system with a stateful revocation system. But at that point you're back to square one and the architect working on this should consider why they're undoing the legitimate benefits JWT provides.

Nonce based revocation is an active process. Timestamp expiry is not actually revocation, it's expiry. If your token is compromised prior to expiry, you're out of luck.

> By "intrinsic", I meant precisely that there is no JWT standard which admits native revocation.

That's patently false.

JWT's basic workflow features token refreshes, issue and expiration timestamps, and even nonces, and the backend workflow also supports arbitrary token rejections to trigger token refreshes.

The only aspect of JWT's workflow that is left as an implementation detail is tracking revoked tokens.

> JWT is stateless. Revocation is stateful. This is a fundamental tension in both cryptography and access control.

This sort of argument is ivory tower nitpicking stated disingenuously. JWT include issue and expiration timestamps, which already keeps the workflow stateless. The only stateful aspect, which is silly nitpicking and technically irrelevant, is keeping track of nonces and arbitrarily revoked tokens, which requires keeping a database to track revocations.

We’ve been going back and forth like this for quite a while, so at this point I doubt I’ll be able to convince you with further explanation. I’m shocked you think it’s “ivory tower nitpicking stated disingenuously” to call stateful tracking of nonces what it is - “stateful.”

I’ll recuse myself from further “nitpicking” I suppose, because this isn’t going anywhere. If you’re interested in actually following why your suggestions are a poor fit for session authentication, I’ll direct you to this flowchart: http://cryto.net/%7Ejoepie91/blog/2016/06/19/stop-using-jwt-....

If you have to track revoked tokens you might as well track active sessions via a session ID.

That's just an argument ignoring the realities of scale. In any reasonable system the number of tokens that need to be held in blacklist until seen will be tiny in comparison to active sessions.

How does it matter how many tokens are in the blacklist? You're looking them up in a DB where the lookup time is O(log n) anyway. To give you an idea of how little it matters, let's say a small blacklist would be 10k tokens while a list of all tokens would be 10M. log2(10k) ≈ 13.29; log2(10M) ≈ 23.25. It's only marginally more, because the main latency of the DB request is the network round-trip time.

The actual issue here is that a lookup needs to be performed at all. For every request, you need to pay the latency of one DB round-trip as well as maintaining code that does this lookup. And if you're going to do that anyway, why bother with this complexity of "stateless" tokens?
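The back-of-the-envelope base-2 logs above can be checked directly:

```python
import math

# blacklist of revoked tokens vs. a table of every active session
blacklist, all_sessions = 10_000, 10_000_000
depth_small = math.log2(blacklist)     # ~13.29 comparisons per lookup
depth_full = math.log2(all_sessions)   # ~23.25 comparisons per lookup
extra = depth_full - depth_small       # fewer than 10 extra comparisons
```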

> How does it matter how many tokens are in the blacklist?

If you're authenticating something internal to a company, like the link between the website and the order status backend, there may be literally one user with one token.

In this case, the list of revoked tokens will take little space, and update very rarely!

If you're authenticating users logging into your website and you decide user logouts should be implemented by token revocation, you're going to have a great many revoked tokens - perhaps within an order of magnitude of the number of active users you have.

I suspect a lot of the disagreement here is between people who are thinking of different situations.

What you’re describing - a microservice architecture - is actually a legitimate use case for JWT. I would say that’s an example of sound authentication, but it’s not session authentication, which is what’s being talked about here. Microservices authenticating and communicating with one another don’t utilize the concept of sessions in the sense that clients (users) and servers do.

For that reason I don’t know that it’s fair to say the disagreement throughout this thread is due to people talking about different things. Microservice authentication notwithstanding, session management is not optimally handled by JWT.

I think the answer is supposed to be that you've done your architecture wrong if you ever allow a revoke list to grow as high as 10k or beyond. You should not have to grant very many long-lived JWT tokens to begin with, so for most revocations it should always be enough to simply let them expire.

If the token blacklist is budgeted and never allowed to grow to a size of more than say 10-200, then it can probably be safely maintained over the lifetime of the project in a way that doesn't require a round-trip, in the source code for the service or otherwise gated behind a release barrier.

I don't know if I agree with that (I've never implemented JWT) but at least I think I've heard of the idea that's how the architecture is supposed to be planned for JWTs.

> If you have to track revoked tokens you might as well track active sessions via a session ID.

No. Tracking revoked tokens is only necessary if for some reason a server wants to reject a valid token, and that's only required until the token expires.

The use of nonces to avoid replay attacks is also a widely established practice, thus we're not talking about extra infrastructure.

Tracking revoked tokens also takes up hardly any resources, as tokens are designed to be short-lived.

What’s the difference between a Bearer Token and JWT? I thought they were related?

A bit of misinformation in this side thread.

A JWT token is composed of three parts: a header, payload, and signature.

The problem is that people can put sensitive info in the payload.

None of it is encrypted; it's only signed (typically with HMAC).

Unless you're keeping track of the tokens, once a token is issued it's valid until it expires, due to its stateless nature.

You can use a JWT as a Bearer token, but since it's only base64 encoded, you can pull out that payload data.

A truly opaque Bearer token will be meaningless to anything other than your server.

Play with the debugger here to see what I'm talking about: https://jwt.io/
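To make the "only base64 encoded" point concrete, here's a hypothetical token built and read back without any key. The signature is deliberately fake: reading the payload never requires verifying it.

```python
import base64
import json

def b64url_decode(seg: str) -> bytes:
    # JWT segments are base64url without padding; restore padding first
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def b64url_encode(obj) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# A made-up token: header.payload.signature
header = b64url_encode({"alg": "HS256", "typ": "JWT"})
payload = b64url_encode({"sub": "alice", "admin": True})
token = f"{header}.{payload}.fake-signature"

# Anyone holding the token can read the claims -- no secret needed
claims = json.loads(b64url_decode(token.split(".")[1]))
print(claims)  # {'sub': 'alice', 'admin': True}
```

This is why putting sensitive info in the payload is a mistake: the signature only proves who issued it, not that its contents are hidden.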

A bearer token is opaque. It could be a JWT, it could be something else, depending on the application.

In essence, a JSON Web Token (JWT) is a bearer token. It's a particular implementation which has been specified and standardised.

JWT in particular uses cryptography to sign a payload containing a timestamp and some other parameters. This way, you can check if it's valid by just verifying the signature, without hitting a DB.

Not all bearer tokens have this property.

Correct me if I am wrong, but if your backend and frontend run on different ports and you are developing locally using Chrome, you have to use CORS to make any non-GET requests (different ports count as different origins).

What we do is make the frontend dev server (e.g. ng serve) proxy requests to the backend during development.

Does sameSite mean you don't need to worry about anti-csrf tokens, or does it just augment it?

SameSite cookies can eliminate threats from cross-domain requests. Strict mode is good enough to block even regular cross-domain GET requests.

However, I wouldn't throw other anti-CSRF measures away because if the attacker can use a stored XSS vuln, they can still make their way to a CSRF as well. Besides that, not all browsers support SameSite flag yet.
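A sketch of what the cookie attributes discussed in this thread look like on the wire (the name and value here are made up):

```python
def session_set_cookie(name: str, value: str) -> str:
    # __Host- prefix requires Secure, Path=/ and no Domain attribute;
    # HttpOnly keeps the value out of document.cookie;
    # SameSite=Strict stops the browser attaching it to cross-site requests.
    return (f"Set-Cookie: __Host-{name}={value}; "
            f"HttpOnly; Secure; SameSite=Strict; Path=/")

print(session_set_cookie("session", "opaque-random-id"))
```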

If you have an API, you can program your web client like an API client, using bearer tokens for authentication (put them in local storage). It's probably better than cookies.

> It's probably better than cookies.

Why do you think so? I would guess it's a tradeoff about what you think is more likely to happen. XSS or CSRF.

Local storage (and session storage) is vulnerable to XSS. Use a strict content security policy and escape (htmlspecialchars in php and similar functions in other languages) output to combat that.

Cookies are vulnerable to CSRF but can't be read from JS if they are http only (no XSS). To combat CSRF, most frameworks already have built-in CSRF token support. In the case of an API, use a double-submit cookie. Frameworks like AngularJS/Angular support that out of the box. Also use the Secure flag, SameSite, and the __Host prefix [0][1]

[0] https://www.youtube.com/watch?v=2uvrGQEy8i4

[1] the slides from the video: https://www.owasp.org/images/3/32/David_Johansson-Double_Def...
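A minimal, framework-agnostic sketch of the double-submit check mentioned above (real frameworks wire this up for you):

```python
import hmac
import secrets

def issue_csrf_token() -> str:
    # Sent to the client twice: once as a (readable, non-HttpOnly) cookie
    # and once embedded in the page, which echoes it back in a header.
    return secrets.token_urlsafe(32)

def csrf_ok(cookie_value: str, header_value: str) -> bool:
    # A cross-site attacker can make the browser *send* the cookie, but
    # can't *read* it to copy it into the header -- so the two values only
    # match for same-origin requests. compare_digest avoids timing leaks.
    return bool(cookie_value) and hmac.compare_digest(cookie_value, header_value or "")
```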

If you mean that HttpOnly for cookies protects against XSS, you are mistaken. The attacker will simply generate requests to the secure endpoints rather than steal the token and use it from somewhere else. HttpOnly does not really protect you against XSS at all.

With "no XSS" I meant a XSS exploit doesn't allow access to the data stored in the cookie. I didn't mean it would protect against XSS. Poor/lazy wording on my part, sorry.

It's true that an attacker can simply generate requests from the XSS'ed browser; my understanding was that the session/token is more valuable to an attacker than only an XSS exploit.

However it seems that someone in the past had the same understanding as me, and tptacek disagreed [0]. Oh well. Also, reading the linked article [1] (are you the author, since you use the same wording?) and its linked articles, it seems both cookies and webstorage are not ideal solutions, but local storage might be preferable since CSRF is not a problem, so that's one less thing to worry about.

[0] https://news.ycombinator.com/item?id=11898525

[1] https://portswigger.net/blog/web-storage-the-lesser-evil-for...

How is that better than a cookie though? Cookies already provide an automatic storage and expiry mechanism. A bonus feature is that they are not accessible by JS code at all if the HttpOnly flag is set.

Browsers automatically attach cookies to HTTP requests, opening the door to attacks like CSRF.

The security impact of automatic client-side expiry is tiny, since token expiration must be done server-side anyway.

The HttpOnly flag as an XSS mitigation is almost useless; competent attackers will simply run their code from the victim's browser and session. To protect against XSS, HttpOnly doesn't really help you at all. You should be setting a CSP that prevents inline and 3rd party scripts by default, and whitelist what you must.

Overall, cookies may seem like they have a lot of security features, but in reality they are just patches over poor original design. IMHO, using local storage is probably better, because there's less room to get it wrong.

If you use cookies as a storage mechanism and ignore the cookie header on your backend, you close the door to CSRF attacks.

Here's one glaring problem with local storage: literally any script on your page can access it (for example, vendor scripts). Cookies can only be accessed by scripts from the same domain from which they're created.

That's true, but if you run untrusted scripts on your site it's pretty much game over, anyway.

Why should those scripts limit themselves to stealing tokens when they can send authenticated requests from the browser? To put it another way, why would you care about knowing the root password when you have a way to run a root shell at will?

It's interesting that every time this comes up people talk as though the only vector for a malicious script running on your site is you serving it yourself. A reminder that browsers have a ridiculously lax permissions/security model for extensions which extension developers have been shown again and again to abuse (see the Stylish incident for instance).

How can they send authenticated requests if they can't access your cookies and your backend ignores the cookie header?

The scenario is XSS, where the attacker manages to run their JS code on your page, and get all the same privileges as your own code on the page. Whatever mechanism your own JS code uses to perform authenticated requests, the attacker can do the same.

That is not the scenario you described (running untrusted scripts on your site). Cookies are not protected from XSS, but they are protected against malicious or compromised vendor/CDN scripts and browser extensions. Local storage, however, is vulnerable to all of the above.

It seems that you are saying that cookies are more protected from third party code than your own code. That is incorrect.

Let's get specific: let's say you have a page on mysite.com. When a user signs in, the server sets a HttpOnly session cookie to authenticate later requests from the user.

Now let's assume your page loads evilsite.com/tracker.js. The code in tracker.js can now send requests to mysite.com, and your HttpOnly session cookie will be sent. There is no extra protection for cookies that would check if the JS code doing the sending came from mysite.com.

Obviously tracker.js cannot read the value of your session cookie (and, indeed, neither can your own code), but mysite.com is more or less totally compromised.

You're describing CSRF, but again: this vulnerability doesn't exist in the scenario I'm describing.

If you don't set HttpOnly on your cookies and ignore the cookie header on your backend (i.e. only use cookies for storage, not for transport), cookies are strictly better than local storage, since the only difference between the two is now local storage's lax access policy.

The scenario you're describing can also be solved by using a CSRF token retrieved from the backend. Meanwhile, there is literally no way to secure secrets kept in local storage from third party scripts.

No, I'm describing XSS. You know, where an attacker injects scripts on your pages, and the attacker's requests come from the same origin as your own requests. In CSRF, the attack is hosted outside the target site.

I don't believe a situation exists where using cookies for client-side storage is more secure than local storage. Could you please explain this in more detail?

Your page is mysite.com. When a user signs in, you save some sort of session token to a non-HttpOnly cookie. Your backend ignores the cookie header, and you send the session token as a different header with every request. (Basically, the same way you'd authenticate if you were to use local storage).

Now assume your page loads evilsite.com/tracker.js. They can send requests to your backend, and the cookie will be included, but since your backend ignores the cookie header it doesn't matter. The malicious script, however, cannot access the cookie directly, since the script's origin is not the same.

That's why I say cookies used this way are strictly more secure than local storage: the fact that they're included with every request is irrelevant, and they're protected from direct access by third party scripts. Local storage is not. Even if you use JWT for auth, you should still store it in a cookie.
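Server-side, the pattern being described boils down to ignoring the Cookie header for authentication and only trusting an explicit header that the page's own JS attaches. A sketch, with a made-up header name and a dict standing in for the session store:

```python
VALID_TOKENS = {"opaque-token-123": "alice"}  # stand-in for a session store

def authenticated_user(headers: dict):
    # Deliberately never consult headers.get("Cookie"): the browser attaches
    # cookies automatically to every request, so trusting them for auth is
    # what opens the door to CSRF in the first place.
    token = headers.get("X-Session-Token")  # hypothetical header name
    return VALID_TOKENS.get(token)
```

With this scheme, a forged cross-site request carries the cookie but not the header, so it authenticates as nobody.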

You are misunderstanding how the same-origin policy applies to scripts. If a page on mysite.com loads evilsite.com/tracker.js, then tracker.js runs with the same "origin" as the rest of the page that loaded it. The script has all the same access, including document.cookie, as a scripts loaded from mysite.com. Try it.

The same-origin policy only limits access between windows and frames. All scripts loaded on a page will have the same "origin".

> competent attackers will simply run their code from the victim's browser and session

What do you mean? JS even on the same page can't read HTTPOnly cookies. If you are assuming that the browser has been hacked then it is pretty much game over regardless of what you use.

We are talking about XSS, where an attacker can run their JS code on your page. If the attacker can run JS on your page, they can already do whatever your signed-in user can do. No need to read the cookie to make authenticated requests, just like your own code doesn’t need to read the cookie.

This sounds an awful lot like JWT.

In what way?

You simply get a bearer token (non JWT) onto the client and use that from local storage instead of a cookie.

Your JavaScript code then makes api calls using the same bearer pathway as other api clients.

The token can still expire, be revoked, etc. it just prevents you having to handle cookie auth on your api.

You've just described the JWT workflow.

No, they've described a Bearer Token workflow. JWT is a specific method that also (most times) uses Bearer tokens, but it wasn't the first, nor does it have a monopoly on Bearer tokens.

I remember building a service when I was experimenting with web development that used randomly generated tokens in a custom HTTP header, and that is closer to Bearer Token (the standard) than Bearer Token is to JWT.

You're trying to be disingenuously pedantic. It's irrelevant if the workflow is specific to JWT or is shared by other bearer token schemes. The point is that JWT, which is a bearer token scheme, follows that workflow, thus it makes no sense to present that workflow as an alternative to the JWT workflow, as it's precisely the same.

> ... as it's precisely the same.

If you believe JWT is "precisely" the same as mere presentation of a token, then you're woefully ignorant of JWT.

> ... it makes no sense to present that workflow as an alternative to the JWT workflow ...

But that's not what happened, is it? In fact, it's the opposite. As I read it, [1] suggests a bearer token workflow, to which [2] replies that the suggestion is "an awful lot like JWT", whereupon [3] clarifies that the original suggestion is just a normal bearer token scheme, which, I claim, shares nothing with "JWT" except the "T".

> ... JWT, which is a bearer token scheme ...

The "T" in "JWT" is the least interesting bit of JWT, and merely a necessity.

> It's irrelevant if the workflow is specific to JWT or is shared by other bearer token schemes

When not talking about any specific bearer token scheme, it is absolutely relevant. Only the generic point was under discussion, until JWT was introduced. JWT is not just another bearer token scheme. It comes with its own additional obligations, restrictions, and extra steps, not to mention the purpose-defeating pitfalls.


[1]: https://news.ycombinator.com/item?id=18768173

[2]: https://news.ycombinator.com/item?id=18768212

[3]: https://news.ycombinator.com/item?id=18768242

> JWT is not just another bearer token scheme. It comes with its own additional obligations, restrictions, and extra steps, not to mention the purpose-defeating pitfalls.

Care to provide an example?

Which is fine to use the same logic, as it's a robust, easy-to-understand system. But if you aren't using JWT then you aren't using JWT.

The parent comment saying “that sounds like JWT” is implying it’s just as bad or has the same shortfalls as using JWT.

The big objection to JWT is that it's a bearer token with no revocation support. If you're going to implement a bearer token with no revocation support, or a custom revocation implementation, anyway, then the criticisms of JWT apply just as much to the system you're building and you might as well just use JWT.

> The big objection to JWT is that it's a bearer token with no revocation support.

That statement is not true. JWTs do support revocation. In short, servers are free to reject any token, which triggers a token refresh on the client side. Token revocation is even an intrinsic aspect of JWT, as they support issue and expiry timestamps, along with a nonce to avoid replay attacks.

It seems some users have an axe to grind regarding the idea of having to keep track of some tokens that were poorly designed (i.e., absurdly long expiry dates without a nonce) but the solution quite obviously is to not misuse the technology. In the very least, if a developer feels compelled to use a broken bearer token scheme that does not expire tokens based on issue date then quite obviously he needs to keep a scratchpad database of blacklisted tokens to compensate for that design mistake.

> In short, servers are free to reject any token, which triggers a token refresh on the client-side.

Servers can of course implement whatever custom behaviour they desire, but the protocol itself (and common implementing libraries) does not have any direct support for revocation.

Furthermore, any revocation implementation will inherently have to compromise the statelessness that is JWT's most prominent selling point.

> Token revocation is even an intrinsic aspect of JWT as they suport issue and expiry timestamps, along with a nonce to avoid replay attacks.

JWT does indeed support expiry and nonces. But these are not the same thing as revocation.

> It seems some users have an axe to grind regarding the idea of having to keep track of some tokens that were poorly designed (i.e., absurdly long expiry dates without a nonce) but the solution quite obviously is to not misuse the technology. In the very least, if a developer feels compelled to use a broken bearer token scheme that does not expire tokens based on issue date then quite obviously he needs to keep a scratchpad database of blacklisted tokens to compensate for that design mistake.

Insults and "obviously"s are not a good way to convince people of your point of view.

> Servers can of course implement whatever custom behaviour they desire, but the protocol itself (and common implementing libraries) does not have any direct support for revocation.

That's patently false. The protocol does support revocation. In fact, its basic usage specifically states that servers are free to force the client to refresh its tokens by simply rejecting them. If a JWT is expected to be ephemeral and servers are free to trigger token reissues, what led you to believe that JWT didn't support one of its basic use cases?

> Furthermore, any revocation implementation will inherently have to compromise the statelessness that is JWT's most prominent selling point.

That's false as well, for a number of reasons, including the fact that JWTs use nonces to avoid replay attacks. Additionally, JWT's main selling point is that it's a bearer token that's actually standardised, extendable, provided as a third-party service, and usable by both web and mobile apps.

> JWT does indeed support expiry and nonces. But these are not the same thing as revocation.

Expiration timestamps and nonces automatically invalidate tokens, which are supposed to be ephemeral, and nonces are a specific strategy to revoke single-use tokens. As it's easy to understand, a bearer token implementation that supports revoking single-use tokens is also an implementation that supports revoking tokens, don't you agree?

> Insults and "obviously"s are not a good way to convince people of your point of view.

Perhaps educating yourself on the issues you're discussing is a more fruitful approach, particularly if you don't feel comfortable with some basic aspects of the technology and some obvious properties.

Also, people make a big stink when authentication cookies aren’t marked as HTTPONLY. Storing tokens in localstorage (even sessionstorage) is just as bad but for some reason more accepted.

Stealing tokens from localstorage or cookies means the attacker can run code in the user's security context. Why would they limit themselves to stealing tokens? Using them outside of the browser would be stupid, anyway, as it would risk tripping reauthentication, IPS, or whatever.

HttpOnly is a joke, and people should stop claiming it helps with XSS. It does not help. Its security benefit is at most neutral. In fact, people often seem to think that it prevents XSS and get lulled into a false sense of security. For that reason, HttpOnly seems to be worse than neutral.

Persistent access via an authentication token is a hell of a lot more reliable than relying on the user not navigating from/refreshing a specific page where XSS is present.

Where does Amazon’s sigv4 fall into play?

Why not JWT?

An oversimplified version of the arguments against JWT for session management (as well as the JOSE specification for signing and encryption) ...

1. The specification has points of ambiguity that have led to a number of flawed implementations.

2. JWT is saddled with unnecessary complexity, which also contributes to recurring implementation flaws.

3. JWT increases the complexity of session revocation in contrast to a simple session ID.

The arguments and counter-arguments are a bit more involved, but be aware that by the time you account for the downsides, you may have negated the value you hoped to gain from stateless web tokens.

If you can use a simple session id, use it. If you need JWT to support external authentication providers, use a short expiration and swap the (fully verified) token for a session id.
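The "swap the verified token for a session id" step might look like this sketch. Verification of the external provider's token is assumed to have already happened; `claims` stands for its decoded payload:

```python
import secrets

SESSIONS = {}  # session id -> user identifier; plain server-side state

def start_session(claims: dict) -> str:
    # Only call this after fully verifying the provider's token
    # (signature, issuer, audience, expiry). From here on the client
    # holds nothing but an opaque, revocable session id.
    sid = secrets.token_urlsafe(32)
    SESSIONS[sid] = claims["sub"]
    return sid

def end_session(sid: str) -> None:
    SESSIONS.pop(sid, None)  # revocation is just a delete
```

The short-lived JWT never outlives the exchange, so none of the revocation headaches discussed above apply to it.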

Healthcare is a dangerous sector for a security novice. Please make sure you are familiar with HIPAA [0], including your obligations when handling health information and the nature of possible sanctions. Handling health data at all is risky. Sharing it with partners is something you probably shouldn't even consider before you can afford a serious legal team.

OpenID is a mechanism for one website to assert a user's identity to another website. OAuth is a way to let a user delegate access to some of their data on one site to another site. Neither have any particular affinity with the healthcare space, and they are not things you sprinkle on for extra security.

[0] https://www.hhs.gov/hipaa/for-professionals/security/index.h...

I've had an idea for a product I've put on hold for two years because it involves medical data and I just don't know if I can secure it to a level I'd be happy with from a moral point of view.

That's before the law gets involved as well.

Yeah... HIPAA is definitely tough. I'd check out https://www.aptible.com if you haven't already. It will at least help out with the infrastructure side of things. Although it does seem like Heroku is offering some services that help too (https://blog.heroku.com/announcing-heroku-shield).

It's definitely not enough alone, but at least gets you going on the security & compliance aspects.

I'm in the UK and our rules are different; we don't have anything directly equivalent to HIPAA (I suspect because we don't currently have the huge number of private hospitals/doctors the US has). In fact, even finding out the exact standards you'd have to comply with in the UK is a challenge.

GDPR is good in that regard as the standards are high and apply to more than just electronic storage/interchange.

People have to follow the Data Protection Act.

Are these useful?

Here's the Code of Practice for NHS organisations and staff: https://www.gov.uk/government/publications/confidentiality-n...

Here's the other code of practice for everyone working with NHS data: https://digital.nhs.uk/data-and-information/looking-after-in...

And here's the guidance about when to share if it's needed: https://digital.nhs.uk/data-and-information/looking-after-in...

Makes sense. I am sure I misworded and got turned around a bit. Much of the documentation around FHIR talks about OIDC, which seems to be in place if you are doing much more sharing of your data. As you mention, these things are probably beyond what is necessary initially and could be added at a later date. However, using a service or an open source project that can scale to that size is an interesting proposition.

HIPAA applies to all health data regardless of what you do with it. It’s one of the few things similar to ITAR that you cannot put off for later. The fines for not complying can be staggering ($50k-$1.5m).

I highly recommend talking to someone who knows HIPAA well.

If you are handling any kind of medical data about people, then you cannot think about security at a future date and your life will be difficult from the start.

Did I say security? I said sharing at a future date.

I would echo a bunch of the comments in this thread. I am only posting this because I own a healthcare startup and your question put the fear of god in me. I would strongly advise that you build a different app if you do not already possess the knowledge required to do this. If you have the means hire a professional and you will be better off. You need to get this right from day 1 or people will get hurt, you will be fined into oblivion and/or possibly thrown in jail. There are also numerous other security concerns that you should be worried about since Auth is just the tip of the iceberg.

I will add the following details which are specific to healthcare companies and a bit of inside baseball. Typically people use an email address as the lookup field for a user account. Since email addresses are considered protected health information under HIPAA, I would highly suggest you use usernames instead (possibly auto-generated to be safe). HITRUST includes some details about password rules for their certification process. No one will question you if you follow their rules (I think it's a 12-character minimum with at least 1 upper/lower/symbol/number each). Use a banned password list; you can find one here[0]. You are going to want to set up some manner of 2-factor authentication (I would recommend U2F or TOTP) for all accounts, with manual backup codes. OAuth and OpenID are goddamn nightmares. You need to own and manage this process entirely yourself.

[0]: https://haveibeenpwned.com/Passwords
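The linked service exposes a k-anonymity range API, so you never send the password (or even its full hash) off the box. A sketch of the client side, with the actual HTTP call left as a comment:

```python
import hashlib

def pwned_range_parts(password: str):
    # k-anonymity scheme used by the Pwned Passwords API: hash locally,
    # send only the first 5 hex chars of the SHA-1 to
    # GET https://api.pwnedpasswords.com/range/<prefix>, then scan the
    # returned "SUFFIX:COUNT" lines for the remaining 35 characters.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_range_parts("password")
print(prefix)  # 5BAA6
```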


Hire an expert.

If you can't answer these questions yourself (which is fine - it's specialized knowledge separate from the skillset needed for building a useful application), you are lacking critical competence for coding anything handling health information.

The security minefield is much much bigger than the login page.

I wanted to disagree; how did those professionals become pros? Learning by doing, most of them, after all.

Then I read the last paragraph of the question.

Please follow this guy’s advice. As someone whose medical data you might one day be handling: please get someone who does this well.

Imagine you (or a family member) ever end up sick; your medical data ends up on Pastebin, and the arstechnica article about it surfaces a forum post from the engineer responsible: “hey guys howto auth?”. Honestly: how would you feel?

> how did those professionals become pros? Learning by doing, most of them, after all.

I agree with your conclusion (especially in a comment down the thread about nurses), and just wanted to add to something you said, because I often come across this sentiment that learning by doing is how professionals become what they are, and wanted to play with that idea. In tech, this sentiment is often reinforced by stories like Elon Musk learning how to build cars and rockets by reading books (which he did, but the truth is more that he surrounded himself with trained professionals who could design and execute). In my mind, to be a trained professional requires:

* conscious practice over a long period of time (in order to see all the variations)

* correct feedback from work, peers or masters (community)

* access to the right tooling and body of knowledge. (guilds, journals, trade secrets)

In many areas of programming, these are achievable by a competent individual working alone, but sometimes these factors aren't there but appear to be, which can lead to false and misleading knowledge. In my own area of numerical mathematics, sometimes newbies try to roll their own linear solver (a seemingly easy exercise), not realizing the full body of research and knowledge there is behind handling corner cases. Also, there are myriad tricks-of-the-trade (guild secrets) that are really hard to learn from just reading code -- but that one can learn from osmosis/word of mouth if one works in a lab or research group. This is why it takes years of doctoral and postdoctoral studies to churn out a good numerical analyst.

The CS analogy would be someone rolling their own database from scratch. In these endeavors, the baseline of knowledge to get started is very low, but the real-world knowledge required to make the product robust needs to be built-up over time and by many competent people (through collaborations/teams/community).

I just wonder if security/auth/crypto products fall into these complexity categories, and perhaps it might not be easy--or indeed possible--to become a professional as an individual (without the right conditions in place), and that it might make sense to "stand on the shoulders of giants" as it were, which was your original point.

Classic hacker news. Ask for technical advice, get called incompetent.

If they asked a bunch of doctors how anesthesia worked, they were just about to go perform surgery at home, you'd expect the doctors to warn that it was a bad idea, no?

I suppose there may be a distinction between asking "how does anaesthesia work?" and "should I perform surgery at home?".

"A is to B as X is to Y", compares A and X, not A and B. It puts A in the context of B, as X is in the context of Y.

The comparison is between "how does anaesthesia work?" (A) and "how does auth work?" (X), relative to "about to perform surgery at home" (B) and "about to implement a service containing medical information" (Y).

The point is: he's about to do something big, and is asking a basic question. The real problem is not the basic question (auth) but the context he's doing it in.

If a nurse in training, in a classroom setting, asked about anaesthesia, it'd be fine. If they're a doctor, about to operate on a live patient, it's different.

I like the analogy in your final paragraph. I see hacker news as the nurse in the classroom setting.

Very few people in the field are competent to design security for health systems. No shame in being like most professional software developers. Immense shame in causing a life-altering breach because you couldn’t recognize the limits of your expertise.

I would say you are assuming a bit too much (but just a bit).

There isn't enough information in the question to tell whether the OP is a complete novice with no idea of the minefield he is getting into, or somebody competent who wants a list of current practices to start further research.

<3 thank you. I have done a lot of research, and yes, my healthcare knowledge is somewhat lacking. However, my goal was to ask an open-ended question to see what others are doing. Research is always key. I also plan on hiring people with experience in the space.

How do you know what expert is worth hiring?

Come on now, security isn't easy but it's not rocket science. If someone is competent enough to be developing applications they're certainly competent enough to do security correctly by researching first.

Before writing any code you should seek to deeply understand the problem space of authentication and authorization. HIPAA compliance is primarily an authorization problem, not an authentication problem. That is: both are important, but the unique set of challenges within the scope of HIPAA have to do with authorization of read/write access to data, not authentication.

Authentication asserts identities. Authorization asserts capabilities. This shifts and compartmentalizes the problem somewhat. Almost all interactive applications need to support robust authentication, but most applications do not require the sophisticated authorization restrictions HIPAA demands.

Whatever it is you choose, you should:

1) Use a mature, reputable library;

2) Use a library which provides the simplest possible interfaces for solving your needs in the most turnkey manner;

3) Engage with a reputable consulting firm specializing in HIPAA compliance and application security.

I would also recommend reading through as much information about Aptible's architecture and design ethos as possible. They have done an excellent job of navigating this problem space.

I roll my own using well supported libraries for the languages I work with. These libraries handle the gory bits and pieces where it's easy to make mistakes.

It's a split between using passwordless logins, or standard password authentication depending on who the target audience is.

I would never in a million years think about using a service like auth0. It's not just a huge privacy issue; a critical component of your app now depends on a third-party service. Also, I know of a few sites who use it and the user experience is really bad. It seems like every other time I access a site that uses it, I have to go to a third-party auth0 screen to re-enter my login details (which are already auto-filled by the browser).

Your user authentication flow is a very unique aspect of your site and it's also one of the first things your users see.

You should have full control over it because if your user's first impression is a slow loading non-intuitive user auth system that bugs them to login every few days they're probably going to look for a competing service. I know I would.

So... what are those libraries?

That depends on what language / framework you use.

With Flask I use Flask-Login. For Rails I still use Devise usually and with Phoenix I just use Plug.Session directly.

For authentication I use Auth0 on the free tier, with a passwordless setup that uses Google OAuth and Microsoft OAuth and allows fallback to emailing a code to a user. We store nothing more than the email address. The great thing about Auth0 is the separation it provides between the authentication layer and the web app, and how if you go down the SaaS route you can allow people to bring their own Auth0 accounts and configure their own bespoke authentication.

For authorization you are going to have to implement your own solution once you have an authenticated session. What someone can do always depends on your app and the functions you provide, so there is no nice third-party solution to this. In my case I store the map of users to roles and what a role can do in PostgreSQL, and cache the answers to "which users are in a role" and "what can a role do". User permission and role changes are infrequent, but they flush the cache and so take immediate effect.

How do you implement that cache invalidation, assuming a multiple app server environment? Is it something like a separate redis server?

Yes, exactly: you use a fast data store such as redis or memcached.

As a user performs activities, a scenario may arise requiring escalation or revocation of authorization roles and their corresponding permissions. Invalidate the cache at that moment, then lazily re-cache the updated authorization info upon the next request.
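That lazy-invalidation pattern can be sketched as follows. This is a minimal illustration: an in-memory dict stands in for redis/memcached, and `db_load_roles` is a hypothetical placeholder for the real database query.

```python
import json
import time

# In-memory stand-in for a shared cache such as redis or memcached.
# In production these three functions would wrap redis GET / SETEX / DEL.
_cache = {}

def cache_get(key):
    entry = _cache.get(key)
    return entry[0] if entry and entry[1] > time.time() else None

def cache_set(key, value, ttl=300):
    _cache[key] = (value, time.time() + ttl)

def cache_delete(key):
    _cache.pop(key, None)

# Hypothetical database lookup; swap in the real PostgreSQL query.
def db_load_roles(user_id):
    return {"alice": ["admin"], "bob": ["viewer"]}.get(user_id, [])

def get_roles(user_id):
    """Answer 'which roles does this user have?', lazily caching the result."""
    key = f"roles:{user_id}"
    cached = cache_get(key)
    if cached is not None:
        return json.loads(cached)
    roles = db_load_roles(user_id)
    cache_set(key, json.dumps(roles))
    return roles

def flush_roles(user_id):
    """Call on any role escalation/revocation; the next request re-populates
    the cache from the source of truth, so the change takes immediate effect."""
    cache_delete(f"roles:{user_id}")
```

The key property is that the cache is never written on a permission change, only deleted; the next request pays the DB round trip once and repopulates it.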

(I authored Yosai)

Ory Hydra


Open source OAuth / OpenID connect server

The docs, API and Docker images make it really easy to start developing against. Then the Docker images and database migration tools make it easy to deploy into our production infrastructure.

Also evaluating the other Ory tools like Keto, a policy engine.

The hackability of these is very attractive over closed services like Auth0.

Dex is another OSS option:


I'm currently building an internal Authentication service with Hydra. I have some questions about its use in production do you mind if I send you an email?

> The hackability of these is very attractive

Maybe not the best choice of phrasing

Since it's in the healthcare space, have you considered hiring a consulting team? Or at least a firm? What other Hacker News readers use for authentication is largely irrelevant since we don't usually work on HIPAA-compliant applications.

What other Hacker News readers use for authentication is largely irrelevant since we don't usually work on HIPAA-compliant applications.

I think this is a rather sweeping, and wrong, generalization.

Though the chattering masses on HN may not be HIPAA devs, that doesn't mean there aren't HIPAA devs here. I'm one of them.

Every time someone brings up an obscure topic on HN, there always seems to be a group of people who specialize in that topic that come out of the woodwork and have fascinating insights.

The largest number of commenters on HN seem to be Googlers, Facebookers, and one man bands. But there are plenty of, for example, Apple devs on HN. They just choose to keep the S/N ratio high.

It depends on what type of system you plan to support. If it's for a hospital setting featuring many different types of actors, roles, and constraints, this requires a greater level of sophistication.

Beware, authorization is an Alice in Wonderland rabbit hole where one may fall far deeper than one expected to.

A few years ago, I ported Apache Shiro from Java to Python, resulting in The Yosai Project: http://yosaiproject.github.io/yosai

It was a grueling but rewarding experience.

I honored Shiro in name and license, open sourcing everything and using Apache 2. I went even further than Shiro by adding two factor authentication workflow using totp and including starter modules for caching, data store, and integration with the web app I was using (pyramid).

If you choose to use python, or even just want something to learn from and reference, check out Yosai. I put a lot into this work to make it useful for others, entirely on my own.

I spoke with Tobias (podcast init) about the project some time ago: https://www.podcastinit.com/yosai-with-darin-gordon-episode-...

If someone else needs to use your API, please use OAuth/OIDC. I've seen countless hand-rolled (and insecure) authentication schemes. It's almost always a nightmare. If you plan on connecting this to multiple apps and partners, rolling your own will create a hellscape for everyone

If you use OAuth, anybody can connect using a standard library and "flow" across tons of languages. And you don't need to worry about screwing up the most security critical part of your API. It will save you countless headaches.

Some people bash OAuth because of JWT, but this is overblown. Storing permissions in a token is the only sane way to do things if you end up with multiple services down the line (you will).

The whole revocation debate is a bunch of noise about nothing. Make the expiration interval fairly short, and if that isn't enough you can make a cache of revoked tokens that only needs to live as long as your expiration interval. This is still orders of magnitude more efficient than not storing permissions in the token and just as secure.
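The short-lived revocation cache described above might look like this sketch (in-memory for illustration; `jti` is the standard JWT token-ID claim, and the claims dict is assumed to come from an already signature-verified token):

```python
import time

TOKEN_TTL = 15 * 60  # short expiration interval, in seconds

# Revoked token IDs ("jti" claim) -> when the entry itself can be dropped.
# The cache only ever needs to hold entries for at most TOKEN_TTL seconds,
# because after that the token has expired on its own anyway.
_revoked = {}

def revoke(jti, now=None):
    now = time.time() if now is None else now
    _revoked[jti] = now + TOKEN_TTL

def is_token_valid(claims, now=None):
    """Check expiry plus the small, short-lived revocation cache."""
    now = time.time() if now is None else now
    for jti in [j for j, drop_at in _revoked.items() if drop_at <= now]:
        del _revoked[jti]  # prune entries whose tokens already expired
    if claims["exp"] <= now:
        return False
    return claims["jti"] not in _revoked
```

Because the cache is bounded by the expiration interval, it stays tiny compared to looking up permissions in a database on every request.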

If you need to revoke all tokens in a breach scenario you just change your signing key. I recommend using HMAC-SHA256 signatures rather than public/private key, since even though public/private is theoretically more secure, calculating those signatures adds quite a bit of overhead. If your backend is using a fast language like Java/C#/Go, the majority of server CPU will be spent signing tokens.

Read about OAuth and ignore the haters. The design is well thought-out, secure, and efficient. If it was really that bad you wouldn't see most of the tech giants migrating to and using it. There's a lot of people that don't like it because they don't understand it well enough.

I was in a similar position, we started using Okta and eventually migrated to AWS Cognito. Rolling your own auth is a recipe for disaster unless you know what you are doing and really need to. Also, be prepared to be fairly locked in once you choose an auth provider, especially if you choose one that is fairly integrated into your ecosystem.

Why did you switch away from Okta?

We use Lambda authorizers; how do you use Cognito?

You can create a custom auth flow using lambda and cognito; you can return a series of challenges and create a stateful flow using session tokens which results in a set of access, identity, and refresh tokens.

Alternatively you can use the auth code flow baked into lambda; if you have premium support make a case and someone can walk you through it :)

Edit: just read other comments pointing out the healthcare thing. Hire a professional.

Honestly, authentication is not that hard. There are many ways to do it, all with valid trade offs.

What you need to know is what your AUTHORIZATION story will be. Can anyone who can hit your API receive all data? Otherwise, you either need some kind of stateful access control or some kind of bearer token granting certain kinds of access. If the latter is simple enough for your use case, then JWT, despite its naysayers, might shine for you. Otherwise you can use just about anything, since you'll be looking up what they can access in a DB anyway.
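A minimal sketch of that kind of claims-based authorization check, with a hypothetical role-to-permission map (the claims are assumed to come from an already-verified bearer token):

```python
# Hypothetical role -> permission map; the claims dict is assumed to come
# from a token whose signature has ALREADY been verified.
ROLE_PERMISSIONS = {
    "clinician": {"patient:read", "patient:write"},
    "billing":   {"invoice:read", "invoice:write"},
}

def can(claims, permission):
    """True if any role carried in the token grants the permission."""
    granted = set()
    for role in claims.get("roles", []):
        granted |= ROLE_PERMISSIONS.get(role, set())
    return permission in granted
```

If the role list lives in the token, this check needs no database hit; if permissions are too fine-grained for that, the same function works against roles loaded from a DB.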

The open source Ory ecosystem ( http://github.com/ory/ ) might have what you're looking for, but it's definitely for advanced use cases. I know a lot of people that worked with Auth0/Okta/AWS Cognito but got so frustrated by downtimes, bugs, and complexity that they moved away. But it is an option for rapid prototyping, although I'd keep a "replace it" somewhere in my milestone planning. Another possibility is Keycloak, which is very enterprise / Java fullstack and quite complex to understand.

Most advice in the comments is pretty bad though. Stuff like "API clients need bearer tokens" is completely backwards and pushed by marketing people from companies (Auth0, Okta, ...) that misuse open protocols (OAuth2, OIDC) as a way to legitimize the closed-source SaaS approach they took. Along the lines of "if it looks complex it looks secure, because most people have no idea". It's actually very easy to use cookies (httpOnly, secure) with API clients, and you're saving yourself so much complexity with refreshing tokens and all that stuff.

Yet another possibility for super rapid prototyping is: https://github.com/bitly/oauth2_proxy

Edit: I forgot Keycloak, but it's also for advanced enterprise use cases (SAML, OIDC, Realms, ...) and (from what I've heard) heavy, with a steep learning curve.

>" Stuff like "API Clients need bearer tokens" is completely backwards and pushed by marketing people from companies (Auth0, Okta, ...) that misuse open protocols (OAuth2, OIDC) as a way to legitimize the closed source saas approach they took."

Can you elaborate on how they "misuse" them? I don't have any familiarity with those two companies, generally curious. Thanks.

I think the distinction is that, if you intend to have a publicly accessible API, tokens are preferred vs cookies. For your own mobile clients, doesn't matter

Why are tokens preferred over cookies for a publicly accessible API?

There are all sorts of cases where managing cookies is annoying when interacting with an API, like via curl. There might be other reasons as well, but making consumption easy is probably reason enough.
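To illustrate the difference: a bearer token is one self-contained header per request, while a cookie forces the API consumer to capture and replay server-issued state. A rough Python sketch (no real HTTP involved, just the header bookkeeping each style requires):

```python
# Token auth: every request is self-contained; the caller just sets one header.
def token_request_headers(token):
    return {"Authorization": f"Bearer {token}"}

# Cookie auth: the caller must capture Set-Cookie from the login response
# and replay it on every later request, i.e. manage client-side state.
def capture_session_cookie(login_response_headers):
    set_cookie = login_response_headers.get("Set-Cookie", "")
    # keep the name=value pair, dropping attributes like Path and HttpOnly
    return set_cookie.split(";")[0]

def cookie_request_headers(session_cookie):
    return {"Cookie": session_cookie}
```

With curl, the equivalent difference is a single `-H "Authorization: Bearer ..."` flag versus juggling a cookie jar across requests.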

"This product is going to need to be secure as it will be in the healthcare space (so think oidc)."

Considering most EMRs (or at least the ones with which I've interacted) don't go very far beyond a username/password combo, you're probably fine keeping things simple.

Generally, when in doubt, I'd strongly recommend using some existing auth library instead of trying to roll your own. It's not clear where exactly in the healthcare world your site will fit, but if you're aiming for hospitals, any hospital worth their salt is going to be using Active Directory or something similar, so you'll probably want to find something that can support offloading user identification in that direction (and fall back to username/password if the org doesn't yet have AD).

I don't know your specific jurisdiction, but at least in the US, as long as you're encrypting all your data (both at-rest and in-transit) and aren't doing anything egregiously stupid (plaintext passwords, single shared password for everyone, literally selling patient data on the Dark Web, etc.) you should have a pretty hard time violating HIPAA, and you'll already be on-par with most extant medical systems. Any further hardening on the authentication front (e.g. specific session management strategies) will just be icing on the security cake.

If you haven't already, I'd suggest reviewing NIST's guidelines for system security; most official HIPAA reference materials point toward NIST guidelines, and most hospitals will tend toward that direction as well.

If your plan is to connect other services then I'd suggest using LDAP for central authentication.

It can easily be connected to any API without much "glue". And most common open source services already support it as auth backend.

It's also easier to audit than any custom service you might concoct on your own because auditors already have experience with it through Active Directory.

LDAP is really overlooked by many and I think people are surprised by the amount of software that offers LDAP support.

I stopped being surprised a few years ago that it's usually in the 'call us' price tier :)

I have been using JWT [1] since its initial push as a mainstream authentication method. There are plenty of libraries that support it and make it easier to use [2]. There's even Auth0 [3], a company built around offering authentication services. Simply check for a valid token in the Authorization header; check out the introduction [4].

The library I've linked brings up a good point to make sure that you know the difference between decoding and verifying the token, but after that it's fairly plain sailing.

[1]: https://jwt.io [2]: https://github.com/auth0/node-jsonwebtoken [3]: https://auth0.com [4]: https://jwt.io/introduction/
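The decode-vs-verify distinction mentioned above is worth spelling out: decoding a JWT is just base64, so anyone can read (or forge) the claims; only signature verification proves authenticity. A stdlib-only HS256 sketch to show the mechanics (a real app should use a vetted library like the one linked above):

```python
import base64
import hashlib
import hmac
import json

def _b64url_encode(raw):
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def _b64url_decode(s):
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(claims, key):
    header = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url_encode(json.dumps(claims).encode())
    sig = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url_encode(sig)}"

def decode_unverified(token):
    """Decoding is just base64: anyone can read the claims. Proves nothing."""
    _header, payload, _sig = token.split(".")
    return json.loads(_b64url_decode(payload))

def verify_hs256(token, key):
    """Only a valid signature proves the claims were issued by the key holder."""
    header, payload, sig = token.split(".")
    expected = hmac.new(key, f"{header}.{payload}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig)):
        raise ValueError("invalid signature")
    return json.loads(_b64url_decode(payload))
```

The failure mode to avoid is calling the equivalent of `decode_unverified` on untrusted input and treating the result as authenticated.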

We use a local version of UAA (https://docs.cloudfoundry.org/concepts/architecture/uaa.html). It behaves like Google or Facebook authentication, where the user grants permission to the apps using the OAuth 2.0 protocol.

We have 10+ web clients, 2 mobile apps, and one internet gateway. Spring Security makes it very easy to integrate everything.

https://oauth.net/2/ is a good starting point, and Auth0 has the best documentation for learning about OAuth: https://auth0.com/docs/protocols/oauth2

The best part of using Oauth is that you can change your authentication server (UAA, Auth0, etc) without changing your app code (only configurations)

If I do a MVP with React.js, my go to solution is Firebase. [0] Authorization is possible as well. [1] If I need to scale this MVP, I would eventually migrate to a self-hosted backend solution. In my case, it would be Node.js with a GraphQL interface that enables authentication with JWT [2]. Alternatives could be Passport.js and Auth0.

- [0] https://www.robinwieruch.de/complete-firebase-authentication...

- [1] https://www.robinwieruch.de/react-firebase-authorization-rol...

- [2] https://www.robinwieruch.de/graphql-apollo-server-tutorial/

I'd advocate against using third-party services here. In general, third-party services and healthcare come with a lot of hurdles and costs which you need to evaluate carefully. Even log data stored on a third-party server that might contain some PII can be a violation of U.S. law unless the vendor has signed a BAA (business associate agreement) and follows all the same security and legal rules required of healthcare tech.

Overall, just start with strong security in mind that will meet the U.S. healthcare security rules/laws; even if you aren't in the U.S., the basic principles are just focused around strong security. People can debate the specific methods, but I will argue using almost any third-party service has potential compliance problems for you. Yes, I agree and understand those services usually specialize in auth, and for most companies that is fine, even for some more fringe areas of healthcare. But take another viewpoint: because those companies secure so many disparate third parties, their attack surface is huge compared to your own, and a vulnerability on their end may force you to make public disclosures. That alone isn't a sole reason not to use them, but do consider all the factors.

Also, in the U.S. you will likely (depending on specific type of product) need to deal with HIPAA and other similar acts (HITECH/TRUST etc). None of these are actually all that complicated if you take them into account early, although going back and adding them later can be a struggle.

The basic principles are: secure everything, have timed (short interval) token expiration, have a global token expunge, and always err on the side of reauth over pass-through. Also, if you have many backend services, do not rely on a proxy authentication service to pass off requests. Force all services to validate the authentication of each request. Yes, this is "expensive" in terms of extra cycles, but it minimizes the risk considerably. Lastly, store trace and audit logs of everything you can imagine, all requests.

Check out https://github.com/sysgears/apollo-universal-starter-kit

Fullstack JS, Apollo GraphQL, numerous client types, batteries included. Has auth implemented plus a number of other things you will find useful in the graphql world.

If you are in healthcare, you may want to find an auth system like DEX that can talk to several backends like LDAP & Active Directory, as each hospital or institution will likely want to reuse what they currently have implemented. You'll need to hook into them. (This is on our roadmap)

I'm using a self-made system atm as an excuse to learn the subject for a MEAN app, but I would use Okta or some similar service if I was 'playing for keeps.' Just boring client-side sessions using Mozilla's client-sessions library with bcrypt for passwords.

I'm finding it hard to get information above the 'follow these steps to use this library' level, yet beneath 'here's how to make a cryptosystem from scratch'. I'm about to read some RFCs unless I find a better intro resource.

All new apps I’m building are based on password-less auth [1] & bearer token. NoPassword is just so much more comfortable than having to remember passwords. And boy, people are really bad at coming up with passwords - you might have seen it this Christmas at your parents’. [1] http://notes.xoxco.com/post/27999787765/is-it-time-for-passw...
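A passwordless email-code flow is simple to sketch. Everything below is illustrative: the pending-code store would live in redis or a DB, and the code would be emailed rather than returned to the caller.

```python
import hashlib
import secrets
import time

CODE_TTL = 600  # codes are valid for ten minutes
_pending = {}   # email -> (sha256 hex of code, expiry); use redis/a DB in production

def start_login(email):
    """Generate a one-time numeric code; in a real app, email it to the user."""
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[email] = (hashlib.sha256(code.encode()).hexdigest(),
                       time.time() + CODE_TTL)
    return code  # returned here only so the sketch is self-contained

def finish_login(email, code):
    """Single-use, expiring, hashed-at-rest code check."""
    entry = _pending.pop(email, None)  # pop makes the code single-use
    if entry is None or entry[1] < time.time():
        return False
    digest = hashlib.sha256(code.encode()).hexdigest()
    return secrets.compare_digest(entry[0], digest)
```

Hashing the code at rest means a leaked pending-login table can't be replayed, and popping on first attempt prevents brute-forcing a live code.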

This works only on devices for which you simultaneously have email access. For instance Netflix on a home gaming console would fail miserably using this schema. See also https://www.troyhunt.com/heres-why-insert-thing-here-is-not-...

The truth about side projects is most never get to the finish line. I'd like to encourage you to get there, and for that, you'll need to use a third party for auth, at least until you finish your API and seek feedback/scale. You can always roll your own after you've validated the viability of your API.

You can choose Firebase or AWS Cognito. They offer ready-made server and client SDKs for a wide variety of targets. Firebase is always free, and AWS is free for up to 50,000 users.

They offer authorization based on roles

Roll my own. Passwords are stored as bcrypt hashes. Just use plain old cookies to store session IDs.
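That approach might look like the following sketch. Note the parent uses bcrypt; stdlib `hashlib.scrypt` is substituted here purely so the example has no third-party dependency (both are salted, deliberately expensive hashes).

```python
import hashlib
import os
import secrets

def hash_password(password):
    # The parent uses bcrypt; stdlib scrypt is shown so this sketch has no
    # dependencies. Both are salted, deliberately expensive hashes.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest

def check_password(password, stored):
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return secrets.compare_digest(candidate, digest)

_sessions = {}  # session id -> user id, held server-side

def create_session(user_id):
    sid = secrets.token_urlsafe(32)
    _sessions[sid] = user_id
    # Sent to the browser as:
    #   Set-Cookie: session=<sid>; HttpOnly; Secure; SameSite=Lax
    return sid

def user_for_session(sid):
    return _sessions.get(sid)
```

Because the session ID is an opaque random value, revoking a login is just deleting the server-side entry, with none of the token-expiry machinery JWTs require.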

What about localstorage for storing a token instead of a cookie?

what about caring about things that matter?

What benefit would that have over a cookie? (Honest question)

Exactly! +1

Simplicity is also Security

No, everyone should use auth0, okta, cognito

Auth isn't rocket science - I'd encourage people to recommend well-guided DIY here for newer developers in a lot of cases, because foundational tech learning experiences are critical to growing as a developer.

There's value to many in SaaS offerings, but any decently-sized programming ecosystem has a crap ton of auth offerings. e.g. Devise [0], Dotnet Identity [1], Django Auth [2]. Authorization is the fiddly/annoying part. For authentication, a reasonably motivated developer can expect to have workable, secure password authentication going in a couple hours, as long as they don't try to invent their own encryption scheme.

[0]: https://github.com/plataformatec/devise [1]: https://docs.microsoft.com/en-us/aspnet/core/security/authen... [2]: https://docs.djangoproject.com/en/2.1/topics/auth/

Devise, if you’re building a Rails app.

Why do all API development tutorials always say "Never roll your own custom auth"? I always found it weird.

They say it because as a development exercise it is both seductively attractive and very dangerous. Furthermore, almost no one's authentication problems are unique or unsolved.

There is approximately never a good reason for a team to roll their own authentication solution from the base primitives unless that's both their core competency and their product differentiation. It offers virtually no upside for virtually certain downside.

What they said..

- Your authentication problems are not unique to you;

- The effort of implementing standards (whether it's front end like OAuth, OIDC, or SAML, or back end like hashing) is a pain in the butt, and it's easy to make bad choices;

- If your project is successful or as your requirements change over time, now you have to figure out how to add MFA, password resets, internationalization, address security audits, etc, etc.

Doing it yourself means you have 100% responsibility for everything when that is probably not your main skillset or really what you want to spend your time doing anyway.

Disclosure: I work for one of the companies mentioned in this thread.

Hey man, funny thing, I just completed your 3 courses on Lynda.com on the REST API learning path yesterday. I did the Design one, the Validation and Authentication one and the OAuth/OpenID one.

Good stuff.

Also, I have a question for you, is there a good place to reach you?

Thanks and great to hear. My email is in my profile. Feel free to drop me a note.

If you're not experienced at it, it's easy to make exploitable mistakes. Most of those have been identified, and avoided, in standard auth implementations.

We use SAML set up for our customers' identity providers (most of the time, Office365 Azure AD or Google Suite), from a NuGet package.

From there, it's a bearer token encoded in JWT format for ease of debugging. From another NuGet.

We also support username and public key authentication for our SFTP server.

We do support username and password if necessary.

and what do you use for authorization?

Custom rules on top of ASP.NET middleware.

We use Truedentity (https://www.truedentity.de) which is pretty robust with smartcard and palm vein pattern to make access secure. It uses EJBCA as backend for PKI and it's possible to configure redundancy with it.

Ignoring the body of your question and answering just the title, I'm developing https://accountd.xyz/ as a system for mixing and linking identities (email, twitter, github, trello, indieauth for now) and using that for https://trackingco.de/, https://listhub.xyz/ and https://sitios.xyz/.

In theory anyone can use it, it works automatically and you don't have to register or anything like that, but I don't know if people are interested in it.

I wonder why macaroons [1] are not even mentioned in this discussion yet.

They have all the upsides of cookies, but also can be narrowed down to be handed to third parties (good for APIs), caveats, and have a standardized and implemented [2] verification scheme.

I wonder why they don't see wider use. Do they have significant downsides?

[1]: http://hackingdistributed.com/2014/05/16/macaroons-are-bette...

[2]: https://github.com/rescrv/libmacaroons

Macaroons have their own issues, see my previous comment here: https://news.ycombinator.com/item?id=17879403

If you've got questions about them I'm happy to answer.

I'm mostly a backend developer (batch processing, systems integration, etc.) and it still amazes me there's no turnkey solution for different languages. I work in a large enterprise so we hook into our SSO for any kind of auth for our web apps, though I haven't personally had to deal with this since my college days. If I remember, .NET had something out of the box.

Anyways, when I dabbled a few years back, I used Stormpath, but they closed down. Les Hazlewood, the creator of Apache Shiro, works at Okta now... which seems to provide enterprise security.

Can anyone comment on their experience with Shiro, or Okta in general, and whether it would help OP?

U2F + a password or biometric.

What I wish is that Apple and co. would give the consumer choices. I am a low-threat target; I don't need to enter a password to unlock the SIP. So presenting a fingerprint + a hardware key would be a huge security improvement and a reasonable defense against 99.9999999% of the threats I face. Unfortunately, Apple assumes we're all being targeted by physical attacks from the NSA, decreasing my convenience level for a non-existent threat.

I'm in the same boat (SaaS app in the health sector) and opted for Auth0 after using it a fair bit in the finance sector with work. I'd say start with that and once your app outgrows it then you can roll out a solution of your own but I'd suggest you don't try to reinvent that particular wheel when you're starting out.

Auth0 for authentication, custom authorization.

I use Meteor and GraphQL with the accounts package and Alan's roles package. It's pretty easy using Meteor.

We did the bakeoff and AWS Cognito is by far the cheapest. It's not as well documented, but it supports OpenID Connect just like all the other third-party auth solutions. The downside is that Cognito does not support a Resource Owner Password flow (not recommended anyhow; see RFC 6749).

Echoing the others telling you to hire a professional. If you have the money, Okta has a professional services division that will set everything up for you. Use them. If you don't have the money, use Auth0. Don't try to hand-roll everything; you will regret it.

If you have the budget for it just use auth0 or similar. You will likely spend less time wiring it up than coming up with a solution yourself.

Going off your question only, this will also likely end up being the most secure implementation for you (relying on a third party service)

> healthcare


Nooope. Not a good idea. OIDC is pretty damn complicated to implement as a server. And it doesn't help at all with anything around revocation etc. To make that possible, you have to add a load of extra work and you basically lose most of the benefits of using JWTs.

First, as others have said, if you don't know what you're doing on this, you have no business trying to secure health data. Bring in someone who does know what they're doing and then pentest it aggressively.

Second, I'd suggest you'd be better keeping it super simple. Just have a token in a table, refer to the token by its ID and then attach a 32 byte crypto random to it which gets checked before the existence of the token is acknowledged. Compare it with constant timing. If you don't know what that is, again, you shouldn't be doing it with health data - best to learn on a project with less sensitive info.
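The parent's scheme (an opaque token ID plus a 32-byte random secret, checked in constant time) could be sketched like this, with an in-memory dict standing in for the token table:

```python
import hashlib
import secrets

# token id -> sha256 of the 32-byte secret. In production this is a DB table;
# storing only the hash means a leaked table doesn't leak usable tokens.
_tokens = {}

def issue_token():
    token_id = secrets.token_hex(8)
    secret = secrets.token_bytes(32)  # the "32 byte crypto random"
    _tokens[token_id] = hashlib.sha256(secret).digest()
    return f"{token_id}.{secret.hex()}"  # the client keeps the full string

def check_token(presented):
    try:
        token_id, secret_hex = presented.split(".")
        secret = bytes.fromhex(secret_hex)
    except ValueError:
        return False
    # Use a dummy value on unknown IDs so we never acknowledge whether the
    # token exists, and compare in constant time to avoid a timing oracle.
    stored = _tokens.get(token_id, b"\x00" * 32)
    return secrets.compare_digest(hashlib.sha256(secret).digest(), stored)
```

The constant-time comparison is the point: a naive `==` on secrets can leak, via response timing, how many leading bytes an attacker has guessed correctly.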

imho github.com/casbin/casbin is the go-to solution when it comes to authorization: it supports ACL, RBAC, and ABAC, it's optimized for performance, it's pretty language-agnostic, and it's backed by Tencent.

You can use meteor’s healthcare track, specifically designed for healthcare apps: http://clinical.meteorapp.com

All our users have client certs, so using them makes sense to me.

I have heard of Keycloak, an open source auth platform.

Any reviews?

Currently integrating Keycloak as auth service for multiple existing projects. It's very advanced, many use cases and configurations are possible. We are in the middle of the migration but so far it hasn't been hard to integrate.

It's a rather large dependency, so I won't recommend it for a single project with straightforward auth requirements.

There are seriously dangerous attitudes in this thread regarding what constitutes token invalidation. Hire someone skilled at secure auth architecture. You should have at least 2-3 senior engineers who are very well versed in this if this company is serious about doing anything related to protected patient healthcare data. One person's opinion should never end up being the only input for super critical decisions like you are trying to make.

Well, we build and use Scatter (http://get-scatter.com), which is both a desktop app and a JS and C# library. This allows you to use the EOS or Tron blockchains and their account systems. Our users have local access to remote accounts on those networks. It's pretty amazing software, and it is really fun building something totally new.

Using anything other than your own libs for your specific needs will probably lead to over-engineering.

I'm using Keycloak for authentication and Vert.x for authorization besides the main use case.

Mentioning that it's in the healthcare space really helps; great detail for this question. Honestly, not many people are good at asking questions.

You need an expert, someone who doesn't need to ask this question.


An email or text message is sent to the customer, and they just input the code they were sent to match.

This is called two-factor auth. SMS is not considered a secure method of transport (mostly due to SIM porting attacks). It only solves opportunistic password compromise via password dumps.

In reality it solves far more than that for the average system and the average user. No one in reality is having their mobile number socially engineered to get into startup X's system. It's a good place to start until you are at a scale where you would have dedicated security engineers to work on the problem.

> No one in reality is having their mobile number socially engineered to get into startup X's system.

The problem is that you can't know whether that's the case at start, at some point in the future, or never. You'll only find out that your guess was wrong when you're breached which is never a good time for any service, particularly one covered by HIPAA. Besides, why intentionally implement a known-to-be-insecure second factor method? This is brand new code; there's no need to incur technical debt from day one. Which leads to:

> It's a good place to start until you are at a scale where you would have dedicated security engineers to work on the problem.

Except that day never comes for a variety of reasons. "Users already expect it so we can't remove that." "There are several dozen other security audits/features/fixes we need to make, let's prioritize those first." "What? SMS is insecure? Weird, never knew that."

Also, if we don't start making the decision now, before the second factor is ever implemented, to move away from using SMS as that second factor for the reasons it is known to be broken, when do we start?
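An SMS-free second factor that's easy to adopt from day one is TOTP (RFC 6238), the algorithm behind Google Authenticator-style apps; the whole thing is a few lines of stdlib Python:

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226: HMAC-SHA1 over the counter, then dynamic truncation."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10**digits:0{digits}d}"

def totp(key, at=None, step=30, digits=6):
    """RFC 6238: HOTP with the counter derived from the current time."""
    at = time.time() if at is None else at
    return hotp(key, int(at // step), digits)
```

The shared secret lives only on the user's device and the server, so there's no SMS network (and no SIM-porting attack) in the loop.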

It's not called two-factor authentication. Two-factor authentication is when you have two factors for authentication. This is just one.

something you have (phone) & something you know (password)

I only “know” the code it gave me for what, a few seconds? If at all... macOS now automatically copies them from texts and 1PW automatically copies TOTPs for relevant logins to the pasteboard. I think the “know” part is something _you_ create/control and use over many instances.

Where’s the second factor?

Once, at a startup, we lost the user account DB to data loss; another day, someone logged into MongoDB and dropped all the tables, and the backups had never been tested, so they did not work.

After that we no longer store these details.

We've been using amazon cognito with lambda authorizers.

That sounds like the problem is how you keep your database permissions and backups, not anything to do with auth.
