
This question comes up all the time on HN. I'm one of a bunch of people on HN that do this kind of work professionally. Here's a recent comment on a recent story about it:


The short answer is: don't overthink it. Do the simplest thing that will work: use 16+ byte random keys read from /dev/urandom and stored in a database. The cool-kid name for this is "bearer tokens".
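In Python, for example, that can be as short as the following sketch (`secrets.token_hex` draws from the same OS CSPRNG as /dev/urandom; the function name is just illustrative):

```python
import secrets

def new_bearer_token() -> str:
    # 16 random bytes (128 bits) from the OS CSPRNG, hex-encoded
    return secrets.token_hex(16)
```

Store the resulting 32-character string in the database and hand it to the client; that's the whole scheme.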

You do not need Amazon-style "access keys" to go with your secrets. You do not need OAuth (OAuth is for delegated authentication, to allow 3rd parties to take actions on behalf of users), you do not need special UUID formats, you do not need to mess around with localStorage, you do not need TLS client certificates, you do not need cryptographic signatures, or any kind of public key crypto, or really any cryptography at all. You almost certainly do not and will not need "stateless" authentication; to get it, you will sacrifice security and some usability, and in a typical application that depends on a database to get anything done anyways, you'll make those sacrifices for nothing.

Do not use JWTs, which are an increasingly (and somewhat inexplicably) popular cryptographic token that every working cryptography engineer I've spoken to hates passionately. JWTs are easy for developers to use, because libraries are a thing, but impossible for security engineers to reason about. There's a race to replace JWTs with something better (PAST is an example) and while I don't think the underlying idea behind JWTs is any good either, even if you disagree, you should still wait for one of the better formats to gain acceptance.

One very important thing you didn't mention: you MUST force transport encryption (SSL/TLS) to be used (deny plaintext connections). This is because the bearer token can be stolen by eavesdropping, and since it's a bearer token it can be re-used by anyone anywhere.

Also remember to time out the tokens: it is almost never a good idea to permit infinitely long login sessions (surprising how often I see this not done). Again remember to invalidate the token when the user changes their password.
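A minimal sketch of both rules (an in-memory Python stand-in for the token table; `SESSION_TTL` and the helper names are illustrative, not from any particular framework):

```python
import secrets
import time

SESSION_TTL = 24 * 3600  # illustrative: one-day sessions

# token -> (user_id, issued_at); stand-in for a database table
tokens = {}

def issue_token(user_id):
    token = secrets.token_hex(16)
    tokens[token] = (user_id, time.time())
    return token

def check_token(token):
    entry = tokens.get(token)
    if entry is None:
        return None
    user_id, issued_at = entry
    if time.time() - issued_at > SESSION_TTL:
        del tokens[token]  # session timed out
        return None
    return user_id

def invalidate_user(user_id):
    # call on password change: drop every live token for the user
    for t in [t for t, (uid, _) in tokens.items() if uid == user_id]:
        del tokens[t]
```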

I agree that OAuth is not necessary on its own, but it can be appropriate if you are also supporting delegated authentication with various 3rd parties: make your own native auth just another OAuth provider.

You need transport encryption no matter what, not just for REST APIs. I'm just addressing the app design concerns.

Is it not true that Basic Authentication requires TLS because it transports credentials as plain text, while other variations encrypt the credentials at a higher level before they are put on the wire? In that case, why is TLS a necessity?

It's necessary because otherwise the encrypted credentials could be copied and used to access the service.

It's not a given that something like a Diffie-Hellman shared secret is going to put the actual secret on the wire during authentication, or that smart auth strategies similar to this one that don't directly transmit secrets are always going to be susceptible to replay attacks.

(I am a dog on the internet, so don't listen to me. I also heard that the best way to get a really good answer is not just to ask any old question, but to give a wrong answer...)

That's interesting, do you have an example or an article about how it would not be susceptible to replay attacks?

I think the key is that you must have a shared secret in advance, which is probably something you won't have in most cases unless you're building a point-to-point encryption.

I'm afraid I don't have such an article, and I'm not an expert (just a dog, right?), but the article that explained Diffie-Hellman in a way that made me feel like I was understanding it used paint: you each get a paint color, and you have a pre-negotiated shared secret color. You mix the paint colors to send signals, and you know what the colors look like when they are blended with the secret, because you've seen them before.
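For what it's worth, the math behind the paint analogy can be sketched with toy numbers (insecurely small, for illustration only; real DH uses 2048+ bit primes):

```python
# Toy Diffie-Hellman: both sides derive the same shared secret
# without ever transmitting their private keys.
p, g = 23, 5            # public modulus and generator (the "base paint")

a = 6                   # Alice's private key (kept secret)
b = 15                  # Bob's private key (kept secret)

A = pow(g, a, p)        # Alice sends A = g^a mod p (her "mixed paint")
B = pow(g, b, p)        # Bob sends   B = g^b mod p

# Each side combines the other's public value with its own private key:
alice_shared = pow(B, a, p)
bob_shared = pow(A, b, p)

assert alice_shared == bob_shared  # both arrive at the same secret
print(alice_shared)  # 2
```

An eavesdropper sees only p, g, A, and B; recovering the shared secret from those is the discrete-log problem.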

What's missing from this to make it safe from replay attacks? (It's obvious that if this is the whole setup, if you could observe the color transmitted, you could simply pass the color again if you wanted it to appear that the message was transmitted a second time.)

The answer, I think, is a nonce or IV (initialization vector). I'm not particularly clear on how a nonce is different from an IV, or whether you would only ever use one or the other, or might use both in certain cases, or in all cases...
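A toy challenge-response sketch of that idea (all names here are made up, and it assumes a pre-shared secret): the server issues a fresh nonce, the client proves knowledge of the secret over that nonce, and consuming the nonce makes a replayed response fail:

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"pre-negotiated secret"  # assumption: both sides already hold this

issued_nonces = set()  # server-side memory of outstanding challenges

def server_challenge():
    nonce = secrets.token_hex(16)  # fresh per attempt, never reused
    issued_nonces.add(nonce)
    return nonce

def client_response(nonce):
    # prove knowledge of the secret without ever sending it
    return hmac.new(SHARED_SECRET, nonce.encode(), hashlib.sha256).hexdigest()

def server_verify(nonce, response):
    if nonce not in issued_nonces:
        return False              # unknown or already-consumed challenge
    issued_nonces.discard(nonce)  # consume it: a replayed response now fails
    expected = hmac.new(SHARED_SECRET, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```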

RE: Denying plaintext connections. Totally agree & great if your clients connect directly to hosting you have full control over. The biggest problem I've come across is that cloud services like CloudFlare & API gateways (Tyk for example) don't have the option (or at least I couldn't find it) to disable HTTP traffic. Plenty offer to redirect HTTP to HTTPS but I haven't been able to refuse HTTP traffic outright.

Does anyone have any recommendations for services that do offer this? (or where those options are in the named services)

If you can install an open source Tyk gateway in your stack, you can use TLS on the Tyk Gateway to refuse HTTP outright. However, this isn't a Tyk cloud feature at present.

Correction: if Tyk Cloud has a custom domain (CNAME) set up for you then it can ignore https - it’s not a self-service option though.

> Do not use JWTs, which are an increasingly (and somewhat inexplicably) popular cryptographic token that every working cryptography engineer I've spoken to hates passionately.

Can we get more intel behind why JWT is bad? I've always been told that as long as you explicitly check both the signature and that the signature algorithm type is what you expect, it's fine. Libraries tend to make the mistake of not doing that second part for you, so implementations have to be aware of that.

The one concern I've always had is that even though they are stateless, most implementations end up making a db request to get info about the user anyway (i.e. their role and permissions), so the stateless advantage of JWT isn't as big as it is touted. You can embed that into the JWT, but then the token inevitably gets huge.

You can't prematurely expire or invalidate JWT tokens once created, unless you keep a database of tokens, at which point you should just use sessions, because that's basically what you have: a session token with additional data.

That doesn't mean JWTs are bad; it just means their use case is more restrictive. JWTs are designed for short-lived sessions; think Google API tokens with a validity of 1 hour. If you're using them for anything longer than that, then you'll probably need to back them with a database so you can support revocation, and at that point JWTs make less sense because they're so large.

I can’t fathom what the problem could be when using private key encryption to create stateless tokens.

I do this to create bearer tokens without JWT.

Anyway, you can find a lot of his comments about JWT by searching ‘tptacek JWT site:ycombinator.com’

In the box at the bottom of the page, just type "author:tptacek jwt" (make sure you switch to "comments" mode).

HN search is much more efficient than Google for this.

If a web application communicates with its back end via REST API, and that API is only meant to serve the web app and no other client, and if they communicate with each other on the same origin (http://myapp.com/api), will I need authentication on that REST API at all? Will disabling CORS be good enough?

Sorry if the answer is obvious...this is not my area of expertise.

It's very possible I completely misunderstood your suggestion, however in case I didn't.

If you're storing the key on the client (cookie or w/e) and in the database and solely using it to authenticate, aren't you going to run into timing attacks if you're using it for retrieval?

What I typically do is also store a unique identifier like email for the lookup and then use a random key for comparison / validation.

Yeah, could the DB token lookup timings by themselves be used to find a real token? It might be several layers deep, and DBs are noisy, but I think it's still possible in theory. Could you get around this by only storing some hash of both the token and a DB secret?
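One hedged way to address both concerns (the `issue`/`verify` helpers here are hypothetical): look up by a non-secret id, store only a hash of the secret so the DB index and any read-only dump never see a usable token, and compare digests in constant time:

```python
import hashlib
import hmac
import secrets

def issue():
    token_id = secrets.token_hex(8)       # non-secret handle used for the DB lookup
    token_secret = secrets.token_hex(16)  # the actual credential, shown to the user once
    record = {
        "id": token_id,
        # only the hash is stored; a read-only dump yields nothing replayable
        "secret_hash": hashlib.sha256(token_secret.encode()).hexdigest(),
    }
    return token_id, token_secret, record

def verify(record, presented_secret):
    presented_hash = hashlib.sha256(presented_secret.encode()).hexdigest()
    # constant-time comparison of two equal-length digests
    return hmac.compare_digest(record["secret_hash"], presented_hash)
```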

Heck, I've been overthinking this. Thank you!

> There's a race to replace JWTs with something better (PAST is an example) and while I don't think the underlying idea behind JWTs is any good either, even if you disagree, you should still wait for one of the better formats to gain acceptance.

Aside from PAST, I recently came across this[0] (called branca), as I too was looking for an alternative to JWT. It seems to be based on Fernet, which, according to the maintainer of the branca spec, isn't maintained anymore.

[0] https://github.com/tuupola/branca-spec

Great answer. One thing I'd like to add is if you're using bearer tokens, make sure your API has an easy way to invalidate and regenerate them, as anyone with the bearer token has full access.

What do you mean when you say: "you do not need to mess around with localStorage"? Aren't you supposed to store this token on the client side (presumably using localStorage/cookies), and then include it in the request?

You seem to know what you're talking about, but I'm a bit confused. With JWT I just store the token in localStorage and then add an Authorization: Bearer header with the token. What's the recommended approach? To send username + token as part of the form data?

Great points! I use high-entropy random secrets as passwords in the Basic Authorization header, with their hashes stored in a database. I also use cookies to make the browser experience pleasant and secure. The cookies are based on an HMAC that uses a single server-side secret, a string representing the principal, and a timestamp. So the cookies work without needing server sessions.

HTTPS is mandatory of course, and caching successful authorizations helps performance.

All of this seems inferior to just using a 128 bit random token drawn from /dev/urandom and stored in a database. If you see "HMAC" in an API authentication design, something weird and probably bad is happening.

I would say, store the hash of the token in the database, but that's my personal preference to add a bit of defense against timing attacks, insider stealing the token, or token database breach.

I don't think this buys you anything. Keep things simple.

I feel like picking a fight. What's the downside? It's super simple to hash them, and it prevents a read-only exploit from turning into a major catastrophe.

Example from something I built. If our DB were obtained by an attacker, with all client API keys, they could go liquidate those accounts (place phone calls). Huge loss, and not far-fetched (this kind of attack happens daily and is profitable). With hashed API keys, nothing is possible. We don't even need to tell people to rotate keys. With plain keys, we'd need to freeze usage for people without e.g. IP address restrictions in place.

Read-only leaks happen all the time. Why not make sure they don't impact your clients' API usage?

I'm not just trying to fight. It's a handful of trivially-validated lines of code that significantly mitigate the impact of a data leak. Seems like a super easy tradeoff.

Click the link in his comment.

I’ve frequently used the equivalent of HMAC_SHA512(long_secret, uid + ’|’ + timestamp) to generate a token on the server, which the client can retain and pass along with requests, and which can be verified on the server without persistence. I assume this is what you refer to as stateless authentication. While I agree that there are no real performance reasons to do so, it seems convenient to me every now and then. Is there a security reason to stop doing so?
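A minimal sketch of that construction (the `LONG_SECRET` and `MAX_AGE` values are placeholder assumptions, and it assumes uids contain no '|'):

```python
import hashlib
import hmac
import time

LONG_SECRET = b"server-side secret"  # placeholder; never leaves the server
MAX_AGE = 3600                       # placeholder validity window (seconds)

def make_token(uid: str) -> str:
    payload = f"{uid}|{int(time.time())}"
    mac = hmac.new(LONG_SECRET, payload.encode(), hashlib.sha512).hexdigest()
    return f"{payload}|{mac}"

def verify_token(token: str):
    try:
        uid, ts, mac = token.split("|")  # assumes uids contain no '|'
    except ValueError:
        return None
    expected = hmac.new(LONG_SECRET, f"{uid}|{ts}".encode(), hashlib.sha512).hexdigest()
    if not hmac.compare_digest(expected, mac):
        return None
    if time.time() - int(ts) > MAX_AGE:
        return None  # expired; note there is no per-token revocation
    return uid
```

Note that, as the replies point out, nothing short of rotating LONG_SECRET can revoke an individual token before MAX_AGE elapses.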

Only if the long_secret was made public.

How do you revoke access to that token?

Well, you don’t, so if that’s a requirement you’ve got to do it some other way.

You can rotate the secret to invalidate all tokens.

No, you can't. That breaks all of your users, and so you'll rarely do it, even when it might be warranted. Don't engineer security countermeasures that you (a) might need to rely on and (b) will be afraid to use.

Good points. But for some types of apps, you might have groups of users (a company, team, municipality) that you might be able to afford the cost of "everyone log in again". Or you might be able to safely log out everyone after business hours (if in the same timezone).

I like your "keep it simple" approach here.

Can you confirm re: your recommendation for random "bearer token" auth, are you talking about just short-lived tokens that are the response to a regular user auth flow (ie login in one request, get a token, use that for subsequent requests in this "session" ala a session cookie) or do you (also) intend it for long-lived tokens used eg for machine accounts?

You can use short-lived tokens, or you can use long-lived tokens. A long-lived 128 bit secret is superior to a username/password, for reasons explained elsewhere on the thread. So if your short-term token scheme requires programs to occasionally deploy the root account's password (or really any password that a user had to come up with), it's flawed.

Right - I agree that deploying a human-used password is not a viable option.

I'm thinking more in terms of deviating from your described solution on storing keys (particularly long term ones), by storing them hashed (and thus require some kind of account identifier prefix in the Bearer token string).

Could you clarify why (or when) to use bearer tokens instead of Basic Authentication (i.e. sending username and password) with every request? Is it that if the server is compromised, only passwords from future logins can be stolen rather than those of everyone who performs a request? The cost of checking the hash? Other reasons?

The purpose of a password is to have an authenticator that is human-readable/writeable and, ideally, human-memorable. Ideally, a strong password is stored only in a person's head; in the very worst (and unfortunately common) case, it's also stored in a password manager.

API authentication doesn't have the memorability problem, because the password has to be stored somewhere the client program can reach it. But, as you can see, it does have the storage problem, which means you need to account for the fact that it might be compromised in storage.

So you need a separate credential for API access. Since it's only going to be used by computers, there's no purpose served by making it anything other than a computer-usable secure credential; hence: a 128 bit token from /dev/urandom.

Once you have a 128 bit token, there's also little purpose served by forcing clients to relay usernames, since any 128 bit random token will uniquely identify an account anyways.

It was my impression that the token is obtained by supplying username and password to a login API, through which it can also be replaced when it expires. Now it seems that I was wrong. How do you suggest the token be managed?

You can do that, or you can just have a page in the application where the user manually fetches a new API token, which is a pretty common UX for this.

> You almost certain[ly] do not and will not need "stateless" authentication

Unless you get Bill Murray to run into people on the street or crash their all-hands meetings and tell them this, no one will believe you. Or at least, it's worth a try since nothing else seems to work, as seen in thread.

My plan is just to keep repeating it over and over again so that more people will see the words arranged that way in a sentence: "you don't need stateless authentication; it's usually not a good thing".

That is sometimes what it takes. Appreciate the additional explanation you’ve put into other comments from this thread. Will most likely be referring back to this one when I get tired of repeating similar points. ;)

Key quote here is

> Do the simplest thing that will work:

For many, a long randomized bearer token will do. Depending on the type of data you expose via the API (for example, financial data or PII), this may not be sufficient for your security team or auditors.

I run security teams for startups with my current firm, Latacora, which is me and 5 other veterans of security firms. Our clients engage with financial services and with regulated environments (like HIPAA/HITECH and the standards and practices of large health networks). Before that, I founded a company called Matasano, which for almost 10 years was one of the largest software security firms in North America. Unlike at Latacora, where our clients are all startups, Matasano's clients ran the gamut from startups to big tech firms to international banks, trading exchanges, utilities, and pharmas.

With the exception of the military, which I on principle won't work with, there's probably no regulatory or audit regime I haven't had experience with.

I say all this as lead-up to a simple assertion: I have never once seen an auditor push back on bearer-token API access. It's the global standard for this problem. If you knew how, for instance, clearing and settlement worked for major exchanges, you'd laugh at the idea that 128 bit random tokens would trip an audit flag.

tl;dr: No.

Why won't you work with the military? (genuinely asking)

In case tptacek doesn’t respond, I’d like to give a (hopefully helpful) answer in the aggregate sense: many members of the information security industry refuse to work with the public sector (read: military and intelligence agencies) for reasons of personal ethics. They typically dislike the privacy-antagonistic ends to which they consider their skills, software or inventions would be used. This is particularly the case with many cryptographers, who could walk onto jobs or lucrative contracts with the NSA, but who would never even consider it.

I haven’t spoken to tptacek about this directly, so I want to make it clear I can’t elucidate his specific concerns. But the broad strokes of his principles are very common in the industry, and typically stem from a disagreement in how the government approaches security (philosophically speaking).

Just a brief addition to this as a user of APIs. Allow me to create a new token and have two tokens for the same account. This makes rotation of tokens without loss of service possible :)

The downside to allowing multiple tokens is that in the real world, people will create new tokens for new deployment environments, but never delete the old ones, which will inevitably end up on Github somehow.

Agreed, but it's a solvable problem.

e.g. You can request a second token, but doing so immediately sets the currently active token to expire and be deleted in X days.
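A sketch of that rotation policy, assuming an in-memory store (a real system would use a database; `GRACE`, `rotate`, and `valid_tokens` are made-up names):

```python
import secrets
import time

GRACE = 7 * 24 * 3600  # hypothetical: old token survives 7 days after rotation

# account -> list of {"token": ..., "expires_at": ...}; None means no expiry yet
accounts = {}

def rotate(account):
    now = time.time()
    toks = accounts.setdefault(account, [])
    for t in toks:
        if t["expires_at"] is None:
            t["expires_at"] = now + GRACE  # start the clock on the old token
    toks.append({"token": secrets.token_hex(16), "expires_at": None})
    return toks[-1]["token"]

def valid_tokens(account):
    now = time.time()
    return [t["token"] for t in accounts.get(account, [])
            if t["expires_at"] is None or t["expires_at"] > now]
```

Both tokens work during the grace window, so deployments can be switched over without loss of service, and stale tokens still die on their own.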

Sorry, I made it sound like I think multiple tokens is an illegitimate choice. It's not; I just think, be aware of the tradeoffs and keep things as simple as they can possibly be.

Don't major companies like Google use JWTs?

Major companies like Google do all sorts of dumb things, and, equally importantly, have gigantic well-funded security teams triple-checking what they do, so that the pitfalls in these standards are mostly an externality to them.

(The actual answer is: I have no idea what Google does with JWTs.)

> have gigantic well-funded security teams triple-checking what they do

Yes! This is the answer to almost every "Well, it works for Google..." that comes up.

Alice: "It works for Google!"

Bob: "Sure, but how many PhDs does Google have on payroll managing it?"

Google has the resources, meaning dollars and reputation, that if they want to do something, they can hire anyone they want to make it possible. They frequently hire the authors of the programming languages and environments they use (that weren't already invented in-house), who can then customize everything to fit Google's needs just so.

Assuming you're a normal mortal corporation, getting the inventors of your platforms on board and committed to your problems specifically is no trivial matter, and you don't have an army of bona fide, credentialed computer scientists on payroll to patch any intervening rough spots, so "Google does it" is not really applicable.

With the right choice of algorithm, and if you control the code which interprets that JWT (whitelisting the alg and choosing the right library), I don't see a reason why JWT would be insecure.

It's absolutely true that if you do everything correctly, a JWT implementation can be secure. Generally, in crypto engineering, we're looking for constructions that minimize the number of things you need to get right.

Yeah, this is the problem with crypto security people: they are one-dimensional. JWT has the benefit of allowing disconnected services to send each other information through the front end.

Which minimizes the number of things you need to get right, or, in your words, equals more secure.

Designing a token that can be validated instead of looked up (design once, implement once), versus maintaining, updating and monitoring a set of firewall rules so that app servers in network zones x and y can make callbacks to a database in network zone z (design many, implement many).

There are a ton of great reasons to use JWT at scale. As with anything use case is important.

I disagree almost absolutely with all of this but am comfortable with how persuasive our respective comments on this thread are or aren't to the rest of HN. I'd rather not re-make points I've already made.

That's also an argument for just using Kerberos over the Internet. And I'm not sure that's a good idea.

I'm almost certain it's a bad idea if that means rolling your own Kerberos implementations in PHP, JavaScript and Go in order for your various back-ends to speak to your various front-ends.

But sure, leverage secret-key crypto and tickets in your own implementation in a way that's more secure than Kerberos.

Or, use a solution that's simple enough any weakness is fairly obvious.

Fair point.

The answer would be yes with a rider. It would vary across the use cases and security requirements.

Do you have problems of the same complexity as google?

No. But OP said that JWT is insecure, yet it seems widely adopted by top tech companies.

Signed REST requests are good too, more secure. And what’s wrong with delegated auth?

Signed requests are not in general more secure:

* The implementation of cryptographic "signing" (really, virtually never signing but rather message authentication) is susceptible to implementation errors.

* The concept of signing is susceptible to an entire class of implementation errors falling under the category of "quoting and canonicalization". See: basically every well-known implementation of "signed URLs" for examples.

* Signed URLs have to make their own special arrangements for expiration and replay protection in ways that stateful authentication doesn't, either because the stateful implementation is trivial or because stateful auth "can't" do the things stateless auth can (like explicitly handing off a URL to a third party for delegated execution).

Stateless authentication is almost never a better idea than stateful authentication. In the real world, most important APIs are statefully authenticated. This is true even in a lot of cases where those APIs (inexplicably) use JWTs, as you discover when you get contracted to look at the source code.

Delegated authentication is useful situationally. But really, there aren't all that many situations where it's needed. It's categorically not useful as a default; it's a terrible, dangerous default.

Every time I read an API that uses signed/authenticated requests (AWS, Let's Encrypt ACME), I wonder exactly the same thing: why is this needed in the first place? If TLS guarantees lack of replays, it seems to me like signed requests just protect their own complex infrastructure from reusing the same request twice...

A lot of text for your argument which is x isn't secure. Not very compelling.

Signed REST requests ensure that auth tokens cannot be leaked, as each request is individually signed with a private key.

Your extreme example btw is hyperbolic. Providing signing sample code to clients is pretty typical

I'm explaining where I'm coming from as a courtesy. I am also comfortable with the number and kind of HN readers who would simply take my argument as-stated without justification: "don't do signed URLs if you can get away with bearer tokens".

>susceptible to implementation errors.

So, just like anything else you might want to implement: if you do it wrong, it's insecure.

So really what everyone should do are PAKE schemes based on isogenies, because then we're all post-quantum secure, and, after all, it's only insecure if you do it wrong.

There's nothing wrong with delegated auth, but in this situation you're not doing any delegation.

What's so bad about JWT's? The cryptographic protocol is sound.

People write things like "the cryptographic protocol behind JWT is sound" and I always wonder where those assertions come from. Do you just think it must be, because none of the people you talk to say it isn't?

More to your original point, why add the complexity before it is required.

Let the complexity of the solutions incrementally grow with the complexity of the problem being solved.

There is nothing wrong with JWT; implementing them just requires some thought so that you don't leak sensitive info, as well as configuring your backend properly.

Large swaths of the internet love to hate on JWT, but it's a major feature in OAuth2 and is in use all over the place as decentralized APIs have become more commonplace.

Do you recommend signing requests?

No. The reason to sign requests is the same as the reason to support OAuth: so that the owner of the account can sign a request and give it to someone else to execute --- delegated authentication. Signed requests are finer-grained than OAuth is, but OAuth is much simpler and is the industry standard at this point. Don't do either thing until you absolutely need it, but then, start with OAuth.

Signed requests have burned a bunch of applications, more than have been burned by OAuth.

Thanks for the response!

My thinking was that you might sign requests so that a request that was intercepted or inadvertently logged would not contain sufficient credentials to authorize arbitrary other requests for the indefinite future. It sounds like you do not consider that a significant enough issue to justify the complexity involved.

Good point: credentials must not be logged. The easiest way to achieve this is to use HTTP Basic auth for the token, because web server infrastructure already knows not to log that, or an OAuth2-style header.
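Both header shapes can be sketched like this (the token value is hypothetical; headers shown are standard HTTP):

```python
import base64

token = "0123456789abcdef0123456789abcdef"  # hypothetical bearer token

# Option 1: OAuth2-style Bearer header
bearer_header = {"Authorization": f"Bearer {token}"}

# Option 2: HTTP Basic with an empty username and the token as the password;
# server infrastructure already treats the Authorization header as sensitive
basic_value = base64.b64encode(f":{token}".encode()).decode()
basic_header = {"Authorization": f"Basic {basic_value}"}
```

Either way, the token rides in the Authorization header rather than in the URL, where it would land in access logs.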

No. You need to be securely using TLS anyways. Signed requests are hard to get right (Google "canonicalization vulnerabilities"). That's not a good tradeoff.

Signed requests were also invented when the transport connection was in the clear: if the request were not signed then it could be modified in transit by an attacker. These days all sessions are encrypted (SSL/TLS) and so this concern doesn't exist (or doesn't exist if you trust the transport).

The AWS API runs over TLS, and uses signed requests.

Perhaps because it was originally designed for use without TLS? Request signing was pretty much ubiquitous 10 years ago.

I'm not from Amazon but I'd guess they want to protect the request from being replayed inside their own systems.

More likely it's so they don't have to have a more convoluted process where they call out to the requesting service to verify the request, and all that entails (on both sides).

man, reading this comment was like a breath of fresh air.

we've been collectively brainwashed to reach out for AWS for almost any dev related work.

Also refreshing that a $5 DO/Linode box can do everything AWS can without the learning curve.

>>> The short answer is: don't overthink it. Do the simplest thing that will work: use 16+ byte random keys read from /dev/urandom and stored in a database. The cool-kid name for this is "bearer tokens".

Please don't reinvent the wheel; just use a GUID.

A GUID is a random number generated to avoid collisions.

If you are getting collisions from 128 bit (and up) numbers coming out of a system CSRNG, your service is likely to be meaningfully affected by bigger problems like plate tectonics and lunar orbital decay.
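A back-of-the-envelope version of that claim, using the standard birthday-bound approximation:

```python
# Birthday bound: among N uniformly random 128-bit tokens, the probability
# of any collision is roughly N^2 / 2^129.
N = 10**12                # a trillion issued tokens
p_collision = N * N / 2.0**129
print(p_collision)        # on the order of 1e-15
```

Even at a trillion tokens, the collision odds are vanishingly small; hardware failure is a far likelier source of duplicate rows.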

Not all GUIDs are random, and not all random GUIDs are cryptographically random: https://en.wikipedia.org/wiki/Universally_unique_identifier#...

GUIDs are NOT random. They are unique (the 'U' in GUID).

