The short answer is: don't overthink it. Do the simplest thing that will work: use 16+ byte random keys read from /dev/urandom and stored in a database. The cool-kid name for this is "bearer tokens".
You do not need Amazon-style "access keys" to go with your secrets. You do not need OAuth (OAuth is for delegated authentication, to allow 3rd parties to take actions on behalf of users), you do not need special UUID formats, you do not need to mess around with localStorage, you do not need TLS client certificates, you do not need cryptographic signatures, or any kind of public key crypto, or really any cryptography at all. You almost certainly do not and will not need "stateless" authentication; to get it, you will sacrifice security and some usability, and in a typical application that depends on a database to get anything done anyways, you'll make those sacrifices for nothing.
Do not use JWTs, which are an increasingly (and somewhat inexplicably) popular cryptographic token that every working cryptography engineer I've spoken to hates passionately. JWTs are easy for developers to use, because libraries are a thing, but impossible for security engineers to reason about. There's a race to replace JWTs with something better (PAST is an example) and while I don't think the underlying idea behind JWTs is any good either, even if you disagree, you should still wait for one of the better formats to gain acceptance.
Also remember to time out the tokens: it is almost never a good idea to permit infinitely long login sessions (surprising how often I see this not done). Again remember to invalidate the token when the user changes their password.
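A minimal sketch of that advice in Python, with an in-memory dict standing in for the database table (all names and the one-hour TTL are illustrative, not prescriptive):

```python
import secrets
import time

SESSION_TTL = 3600  # one hour; pick whatever suits your app

# In-memory stand-in for a database table of sessions.
sessions = {}  # token -> {"user_id": ..., "expires_at": ...}

def issue_token(user_id):
    # secrets.token_hex reads from the OS CSPRNG (/dev/urandom on Linux).
    token = secrets.token_hex(16)  # 16 bytes = 128 bits of randomness
    sessions[token] = {"user_id": user_id,
                       "expires_at": time.time() + SESSION_TTL}
    return token

def check_token(token):
    record = sessions.get(token)
    if record is None or record["expires_at"] < time.time():
        sessions.pop(token, None)  # drop expired tokens eagerly
        return None
    return record["user_id"]

def invalidate_user(user_id):
    # Call this on password change: kill every live session for the user.
    for tok in [t for t, r in sessions.items() if r["user_id"] == user_id]:
        del sessions[tok]
```

With a real database the dict becomes a table keyed on the token, and `invalidate_user` becomes a `DELETE ... WHERE user_id = ?`.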
I agree that OAuth is not necessary on its own, but it can be appropriate if you are also supporting delegated authentication with various 3rd parties: make your own native auth just another OAuth provider.
(I am a dog on the internet, so don't listen to me. I also heard that the best way to get a really good answer is not just to ask any old question, but to give a wrong answer...)
I'm afraid I don't have such an article, and I'm not an expert (just a dog, right?), but the article that explained Diffie-Hellman in a way that made me feel like I understood it used paint: you each get a paint color, and you have a pre-negotiated shared secret color. You mix the paint colors to send signals, and you know what the colors look like when they're blended with the secret, because you've seen them before.
What's missing from this to make it safe from replay attacks? (It's obvious that if this is the whole setup, if you could observe the color transmitted, you could simply pass the color again if you wanted it to appear that the message was transmitted a second time.)
The answer, I think, is a nonce or IV (initialization vector). I'm not particularly clear on how a nonce differs from an IV, or whether you would only ever use one or the other, or might use both in certain cases, or in all cases...
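The nonce half of that answer can be sketched in a few lines, assuming the server can remember which nonces it has already accepted (the names here are made up; in production the set would live in a shared store with an expiry window):

```python
import secrets

# Nonces the server has already accepted; process memory here,
# a shared store with TTLs in a real deployment.
seen_nonces = set()

def new_nonce():
    return secrets.token_hex(12)

def accept_message(nonce, payload):
    # An attacker replaying a captured message resubmits the same
    # nonce, so any nonce seen before is refused.
    if nonce in seen_nonces:
        return False
    seen_nonces.add(nonce)
    return True
```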
Does anyone have any recommendations for services that do offer this? (or where those options are in the named services)
Can we get more intel behind why JWT is bad? I've always been told that as long as you explicitly check both the signature and that the signature algorithm type is what you expect, it's fine. Libraries tend to make the mistake of not doing that second part for you, so implementations have to be aware of that.
The one concern I've always had is that even though they are stateless, most implementations end up making a db request to get info about the user anyway (i.e. their role and permissions), so the stateless advantage of JWT isn't as big as it's touted to be. You can embed that into the JWT, but then the token inevitably gets huge.
I do this to create bearer tokens without JWT.
Anyway, you can find a lot of his comments about JWT by searching ‘tptacek JWT site:ycombinator.com’
HN search is much more efficient than Google for this.
We've been collectively brainwashed to reach for AWS for almost any dev-related work.
Also refreshing that a $5 DO/Linode box can do everything AWS can without the learning curve.
Sorry if the answer is obvious...this is not my area of expertise.
If you're storing the key on the client (cookie or w/e) and in the database and solely using it to authenticate, aren't you going to run into timing attacks if you're using it for retrieval?
What I typically do is also store a unique identifier like email for the lookup and then use a random key for comparison / validation.
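A sketch of that pattern, with an illustrative in-memory user table: the identifier does the lookup, and `hmac.compare_digest` does the constant-time comparison:

```python
import hmac

# Stand-in user table: the email is the lookup key, the API key is
# compared in constant time rather than used directly for retrieval.
users = {"alice@example.com": "k3y-for-alice"}

def authenticate(email, presented_key):
    stored = users.get(email)
    if stored is None:
        return False
    # compare_digest runs in time independent of where the strings
    # differ, closing the byte-by-byte timing side channel.
    return hmac.compare_digest(stored, presented_key)
```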
You seem to know what you're talking about, but I'm a bit confused. With JWT I just store the token in localStorage and then add an Authorization: Bearer header with the token. What's the recommended approach? To send username + token as part of the form data?
Aside from PAST, I recently came across branca as I too was looking for an alternative to JWT. It seems to be based on Fernet, which, according to the maintainer of this spec, isn't maintained anymore.
HTTPS is mandatory of course, and caching successful authorizations helps performance.
Example from something I built: if our db were leaked to an attacker, with all client API keys in it, they could go liquidate those accounts (place phone calls). Huge loss, and not far-fetched (this kind of attack happens daily and is profitable). With hashed API keys, nothing is possible. We don't even need to tell people to rotate keys. With plain keys, we'd need to freeze usage for people without e.g. IP address restrictions in place.
Read-only leaks happen all the time. Why not make sure they don't impact your clients' API usage?
I'm not just trying to fight. It's a handful of trivially-validated lines of code that significantly mitigate the impact of a data leak. Seems like a super easy tradeoff.
Can you confirm re: your recommendation for random "bearer token" auth, are you talking about just short-lived tokens that are the response to a regular user auth flow (ie login in one request, get a token, use that for subsequent requests in this "session" ala a session cookie) or do you (also) intend it for long-lived tokens used eg for machine accounts?
I'm thinking more in terms of deviating from your described solution on storing keys (particularly long term ones), by storing them hashed (and thus require some kind of account identifier prefix in the Bearer token string).
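One possible shape for that (hypothetical names; plain SHA-256 without salting or stretching is enough here because the secret is already high-entropy, unlike a password):

```python
import hashlib
import hmac
import secrets

api_keys = {}  # key_id -> SHA-256 hex digest of the secret part

def issue_api_key():
    key_id = secrets.token_hex(4)    # short public prefix used for lookup
    secret = secrets.token_hex(16)   # 128-bit secret part
    # Only the hash hits the database, so a read-only leak of this
    # table yields nothing directly usable against the API.
    api_keys[key_id] = hashlib.sha256(secret.encode()).hexdigest()
    return f"{key_id}.{secret}"      # the full string the client keeps

def check_api_key(presented):
    key_id, _, secret = presented.partition(".")
    stored = api_keys.get(key_id)
    if stored is None:
        return False
    digest = hashlib.sha256(secret.encode()).hexdigest()
    return hmac.compare_digest(stored, digest)
```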
API authentication doesn't have the memorability problem, because the password has to be stored somewhere the client program can reach it. But, as you can see, it does have the storage problem, which means you need to account for the fact that it might be compromised in storage.
So you need a separate credential for API access. Since it's only going to be used by computers, there's no purpose served by making it anything other than a computer-usable secure credential; hence: a 128 bit token from /dev/urandom.
Once you have a 128 bit token, there's also little purpose served by forcing clients to relay usernames, since any 128 bit random token will uniquely identify an account anyways.
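As a sketch of that point, with an in-memory dict standing in for the credential table: the token itself is the lookup key, so no username accompanies it.

```python
import secrets

accounts = {}  # token -> account id; the token alone is the lookup key

def create_credential(account_id):
    # 16 bytes from the OS CSPRNG (secrets.token_hex is the portable way
    # to "read from /dev/urandom"). With 2^128 possible values,
    # collisions are not a practical concern.
    token = secrets.token_hex(16)
    accounts[token] = account_id
    return token

def whoami(token):
    return accounts.get(token)
```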
Unless you get Bill Murray to run into people on the street or crash their all-hands meetings and tell them this, no one will believe you. Or at least, it's worth a try since nothing else seems to work, as seen in thread.
> Do the simplest thing that will work:
For many, a long randomized bearer token will do. Depending on the type of data you expose via the API (e.g. financial data, PII), this may not be sufficient for your security team or auditors.
With the exception of the military, which I on principle won't work with, there's probably no regulatory or audit regime I haven't had experience with.
I say all this as lead-up to a simple assertion: I have never once seen an auditor push back on bearer-token API access. It's the global standard for this problem. If you knew how, for instance, clearing and settlement worked for major exchanges, you'd laugh at the idea that 128 bit random tokens would trip an audit flag.
I haven’t spoken to tptacek about this directly, so I want to make it clear I can’t elucidate his specific concerns. But the broad strokes of his principles are very common in the industry, and typically stem from a disagreement in how the government approaches security (philosophically speaking).
e.g. You can request a second token, but doing so immediately sets the currently active token to expire and be deleted in X days.
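That rotation scheme might look like this (illustrative names, with a seven-day grace period standing in for the "X days"):

```python
import secrets
import time

GRACE_PERIOD = 7 * 24 * 3600  # the "X days" the old token stays valid

tokens = {}  # token -> {"account": ..., "expires_at": ...} (None = no expiry)

def issue(account):
    t = secrets.token_hex(16)
    tokens[t] = {"account": account, "expires_at": None}
    return t

def rotate(account):
    # Requesting a second token starts the clock on every currently
    # active token for the account, rather than killing it instantly,
    # so deployed clients have time to switch over.
    now = time.time()
    for rec in tokens.values():
        if rec["account"] == account and rec["expires_at"] is None:
            rec["expires_at"] = now + GRACE_PERIOD
    return issue(account)
```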
(The actual answer is: I have no idea what Google does with JWTs.)
Yes! This is the answer to almost every "Well, it works for Google..." that comes up.
Alice: "It works for Google!"
Bob: "Sure, but how many PhDs does Google have on payroll managing it?"
Google has the resources, meaning dollars and reputation, that if they want to do something, they can hire anyone they want to make it possible. They frequently hire the authors of the programming languages and environments they use (that weren't already invented in-house), who can then customize everything to fit Google's needs just so.
Assuming you're a normal mortal corporation, getting the inventors of your platforms on board and committed to your problems specifically is no trivial matter, and you don't have an army of bona fide, credentialed computer scientists on payroll to patch any intervening rough spots, so "Google does it" is not really applicable.
Which minimizes the number of things you need to get right, or in your words, equals more secure.
Designing a token that can be validated instead of looked up (design/implement once), versus
maintaining, updating, and monitoring a set of firewall rules so that app servers in network zones x and y can make callbacks to a database in network zone z (design many, implement many).
There are a ton of great reasons to use JWT at scale. As with anything, use case is important.
But sure, leverage secret-key crypto and tickets in your own implementation in a way that's more secure than Kerberos.
Or, use a solution that's simple enough any weakness is fairly obvious.
* The implementation of cryptographic "signing" (really, virtually never signing but rather message authentication) is susceptible to implementation errors.
* The concept of signing is susceptible to an entire class of implementation errors falling under the category of "quoting and canonicalization". See: basically every well-known implementation of "signed URLs" for examples.
* Signed URLs have to make their own special arrangements for expiration and replay protection in ways that stateful authentication doesn't, either because the stateful implementation is trivial or because stateful auth "can't" do the things stateless auth can (like explicitly handing off a URL to a third party for delegated execution).
Stateless authentication is almost never a better idea than stateful authentication. In the real world, most important APIs are statefully authenticated. This is true even in a lot of cases where those APIs (inexplicably) use JWTs, as you discover when you get contracted to look at the source code.
Delegated authentication is useful situationally. But really, there aren't all that many situations where it's needed. It's categorically not useful as a default; it's a terrible, dangerous default.
Signed REST requests ensure that auth tokens cannot be leaked, as each request is individually signed by a private key.
Your extreme example, btw, is hyperbolic. Providing signing sample code to clients is pretty typical.
So, just like anything else you might want to implement: if you do it wrong, it's insecure.
Let the complexity of the solutions incrementally grow with the complexity of the problem being solved.
Large swaths of the internet love to hate on JWT, but it's a major feature in OAuth2 and is in use all over the place as decentralized APIs have become more commonplace.
Signed requests have burned a bunch of applications, more than have been burned by OAuth.
My thinking was that you might sign requests so that a request that was intercepted or inadvertently logged would not contain sufficient credentials to authorize arbitrary other requests for the indefinite future. It sounds like you do not consider that a significant enough issue to justify the complexity involved.
Please don't reinvent the wheel; use a GUID.
A GUID is a random number generated to avoid collisions.
The API client should send this token in every request that requires authentication, often in a header such as `Authorization: Bearer 123TheToken456`.
If DB performance becomes a problem (or you want to expose signed session metadata), consider using JWT to provide session validation with the request itself. The downsides of JWT are that it's often used to hold secret values (don't do this), or is a few kilobytes big, which makes all requests slower, or stupid mistakes in signing and session validation that make it very insecure, like allowing any request to just specify false permissions.
Does it keep a copy somewhere and check against it on every request?
Yes. The last time I did this we checked the X-Token header on every request, if it didn't exist or there were multiple we replied 401. If only one was there we checked a DB table of active tokens, if it wasn't there or had expired we replied 401. If it was there but wasn't associated with a role that had access to the requested resource, we replied 403. If it was not expired and had access we continued with the request.
As soon as you get away from "check authentication on every request" your attack vectors increase. As a bad actor, I no longer need to bypass your authentication, I just need to bypass whatever system you have in place to decide whether or not to authenticate me. That's generally going to be easier.
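The per-request flow described above can be sketched in Python (the tokens, roles, and paths are all made up for illustration):

```python
import time

active_tokens = {
    # token -> (role, expires_at)
    "tok-admin": ("admin", time.time() + 3600),
}
resource_roles = {"/reports": {"admin"}}  # roles allowed per path

def handle(headers, path):
    # Mirrors the flow above: 401 for a missing, duplicated, unknown,
    # or expired token; 403 for a valid token whose role lacks access;
    # 200 otherwise.
    values = headers.get("X-Token", [])
    if len(values) != 1:
        return 401
    entry = active_tokens.get(values[0])
    if entry is None or entry[1] < time.time():
        return 401
    role, _ = entry
    if role not in resource_roles.get(path, set()):
        return 403
    return 200
```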
Yep, a store like Redis that has automatic row expiration is what I have used in the past. Most clients will often be bursty in their requests so a simple few second cache on API after verifying the token can also be useful/performant.
I don't even store role information in them, since authorization checks are performed on the server anyway.
If the client needs to know what a user is allowed to do with a resource (so it knows not to display certain buttons, etc.) I have the client do an OPTIONS call (with the token) to see what methods are allowed.
Lately, I've been thinking about replacing the whole JWT scheme with simple bearer tokens stored in the database, mostly because it would make revocation simple and I can't think of anything I would lose by giving up JWTs (a little storage space in the database?), and I don't think switching the type of bearer token I'm working with will actually be very painful implementation-wise. You know what, I'm adding a task to my backlog...
If you want to be completely stateless, you need to send the authorization info on each request, probably as basic auth headers.
This is actually simpler than implementing the Bearer token, for both client and server, but requires that the client retain the non-expiring username and password, rather than retaining the expiring, session-specific Bearer token.
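Constructing the Basic auth header really is a one-liner (the test vector below is the well-known `Aladdin` example from RFC 7617):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # HTTP Basic auth: "Basic " + base64("user:pass"), sent on every request.
    creds = f"{username}:{password}".encode()
    return "Basic " + base64.b64encode(creds).decode()
```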
IMO, this makes this simpler solution a non-starter. It becomes way too easy to leak credentials.
You also lose any method to force a re-authentication. With a token, I could expire with no activity for an hour and allow it to be good for a max of 2 hours.
Users have a bad habit of just leaving computers. With a token, the worst case is someone has short-lived access to something. With a user/pass left on the client, the worst case now becomes the user/pass being taken.
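The two limits described above compose into a simple predicate (the constants are illustrative):

```python
import time

IDLE_TIMEOUT = 3600        # expire after an hour of inactivity
MAX_LIFETIME = 2 * 3600    # and never let a session run past two hours

def session_valid(issued_at, last_seen, now=None):
    # Valid only while BOTH limits hold: recently active, and within
    # the absolute cap measured from issuance.
    now = time.time() if now is None else now
    return (now - last_seen) < IDLE_TIMEOUT and (now - issued_at) < MAX_LIFETIME
```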
But most clients store their user/pass in their browser anyways so I'm not sure it's a security win for preventing credential loss.
You don't lose re-auth. The master system issuing API keys can revoke keys, too.
But anyways maybe we're talking about different contexts because I don't understand the scenario you're describing.
JWT is also poorly specified (no protocol under any circumstances should use negotiation, it does not support revocation, and it has been hammered home by the best security folks I know that public key cryptography is what you do when you don't have any other choice) and dangerous to use. Avoid it. Do the simplest thing that can possibly work. That's a session token. If, in the far and unlikely future, you are so successful that a single database call is so harmful, then you have the money to hire someone who doesn't have to Ask HN this question.
Also, mixing credentials into URL does not feel like a good separation of concerns, e.g. URLs are often logged and analyzed in separate logging/monitoring/analytic tools, so there is a bigger risk to have credentials leaked over some side-channel.
I do plan to go back to Passport sometime this year. The number of OAuth providers is nearly overwhelming - too much to ignore. But also daunting for the first-time student.
Doing it that way, Passport doesn't require an ORM at all. You'll need to obviously provide a way to auth a user and verify a token, but that's then up to you.
Now, if you want to actually use OAuth, it can get complicated because of the flow.
Using well-developed hashing schemes like pbkdf2 (which the auth snippet uses) or bcrypt (another good and common option) and storing the output is not rolling your own crypto. Writing your own hash function would be rolling your own crypto.
People talk about not rolling your own crypto because crypto is very, very sensitive to even very small errors, in ways that other code is not. Writing your own authentication using a well-known strong hashing scheme (which handles salting and verifying passwords in a timing attack resistant way) is much less likely to have vulnerabilities that aren't obvious.
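For reference, a stdlib-only sketch of that scheme using `hashlib.pbkdf2_hmac`; the iteration count is an assumption you should tune to your hardware:

```python
import hashlib
import hmac
import os

# Assumption: tune so one hash takes on the order of 100ms on your
# hardware; current guidance for PBKDF2-HMAC-SHA256 is in the
# hundreds of thousands of iterations.
ITERATIONS = 600_000

def hash_password(password: str):
    salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest    # store both alongside the user record

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # compare_digest avoids leaking where the two digests first differ.
    return hmac.compare_digest(candidate, digest)
```

The salting and the constant-time comparison are exactly the fiddly parts the parent comment says a well-known scheme handles for you.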
To me, passportjs might be useful if you need to plug into 3rd party auth APIs, but I don't really see the point. Authentication is a core part of your application and you should always know exactly how it works.
If you can't store an authn secret with confidence, how can you do anything with confidence?
I will concede, however, that the most basic forms of authentication that I've used are so close to the metal that they're usually already built into whatever you're using to do communication.
I would consider more complicated solutions only if you first come to conclusion that these simple things are not fit for the purpose.
True that some fancy token based solution may reduce database load, but if the API is doing something useful then that one primary key lookup and potentially the password hashing won't be a show stopper. Drawback with tokens and skipping the DB check is that you can't simply kill a client behaving badly. With API key you can just change the key and requests stop immediately (with MVP product this might be an issue, since maybe you have decided to add rate limits etc later).
You can always introduce other forms of authentication later. I have a slight preference for basic/digest auth as the secret isn't part of the URL, and therefore not cached/logged by any network equipment.
Does basic auth work if the client is a browser and you want to have a nice login screen?
You can solve this with a token/user blacklist. There are desirable (and undesirable) characteristics of using a blacklist instead of a whitelist, but you don't lose this capability.
Do the simplest thing that will work.
This is a complex distributed systems topic, and there are applications for both.
"I can access this thing" are totally not that for the 99.9% case. Do the simplest thing that will work--and the simplest thing so very rarely involves public-key cryptography!
* Public data API (no notion of user account, e.g. a weather API, a newspaper API, GitHub public events, Reddit /r/popular, etc): use an API key. The developers must create an account on your website to create their API key; each API call is accompanied by the API key in the query string (?key=...)
* Self-used API for AJAX (notion of user account, e.g. all the AJAX calls behind the scenes when you use Gmail): use the same user cookie as for the rest of the website. The API is like any other resource on the website except it returns JSON/XML/whatever instead of HTML.
* Internal API for micro-services: API key shared by both ends of the service. There can be a notion of user accounts, but it's a business notion and the user is an argument like any other. If possible, your micro-services shouldn't actually be accessible on the public Internet.
* API with the notion of user accounts intended for third-parties (e.g. Reddit mobile app where the users can authorize the app to use their Reddit account on their behalf): OAuth2
A username and a password is so much simpler.
For APIs, it should be more manageable but many places stumble with key management and a lot of developers were resistant to learning enough about the tooling to do things like manage test instances.
The problem is that this isn't really true: it's more like this:
1. You go through a tedious and convoluted process to get
the certificate, which requires using a largely-ignored browser feature which is now deprecated (https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ke...). Even when it was implemented, the UI was not great – e.g. https://www.instantssl.com will fail if you fill out the form too quickly, before the browser has finished generating a key.
2. Wait for the email to arrive and follow the retrieval process to get the certificate. Then follow the clunky UI to install it. You'll be told that it's really important to back it up but e.g. Firefox won't give you any instructions about where to even start to do that.
3. You then need a non-trivial amount of work to export the private key and certificate and install it on all of your devices, which is another process where the UX was apparently never considered seriously at any point over the last 20 years.
4. Every time you visit a site or send an email, you now have to select which key you want to use.
5. Every year, repeat the process starting at step 1.
Don't get me wrong, I'd love for this to be available and am still somewhat amazed that after however many years nobody has made a serious effort to improve the experience. It'd be really nice to have a LetsEncrypt-style effort to remove the warts from this process so it's approachable for normal people without a heavy support pool.
This is another area where I wish Mozilla hadn't prematurely killed Persona as it'd be really nice if there was a service which would allow you to associate different client certificates with a single user identity so private keys never needed to leave the device, and register things like tokens. A U2F-style focus on the user-experience would be really nice: once a year when you try to login it redirects you to a page which says “Enter your password and tap the token if you want to keep using it" and refreshes the certificate with no other ceremony.
Is there a reason why the server couldn't send the certificate back to the browser via HTTPS?
> You then need a non-trivial amount of work to export the private key and certificate and install it on all of your devices,
Would it not be better to just use a different key for each device? That is, repeat steps 1 and 2 for every device you plan to use?
> Every time you visit a site or send an email, you now have to select which key you want to use.
Could the browser remember which website the client certificate was used for? If so, then the user won't have to make the selection more than once.
> Every year, repeat the process starting at step 1.
Outside of a device getting compromised, is there a good reason for updating certificates more often than once every 5 years?
> It'd be really nice to have a LetsEncrypt-style effort to remove the warts from this process so it's approachable for normal people without a heavy support pool.
I'm still doing more research on this, but what did the <keygen> HTML element lack that the process used by Let's Encrypt provide?
> This is another area where I wish Mozilla hadn't prematurely killed Persona as it'd be really nice if there was a service which would allow you to associate different client certificates with a single user identity so private keys never needed to leave the device
Shouldn't the private key be something that's associated with the browser? That is, when you install the browser, a private key is generated and used for all certificate signing requests. I think the process could be extended to add additional browser instances for a given account on a website. For example, you could take the CSR from the other device and use your first device to send it to the server, get the certificate, and then copy it back to your other device.
1. Click on link to sign up for new account on news.ycombinator.com
2. Enter information for the account including a CSR that your browser generates for you
3. Submit the information to the server
4. Server serves another page that confirms account creation and includes the certificate
5. The browser provides a way to import that certificate to a client certificate store it maintains
6. Next time you visit the website, your browser knows to use that client certificate
Registering other browsers would require sending the CSR via the first registered browser and copying the certificate that the server returns to the second browser.
Of course, this would be for websites that don't have a need to verify your identity (like reddit or Hacker News). For banks, or credit cards, there would have to be a way to verify the identity of the person signing up for an account.
> Is there a reason why the server couldn't send the certificate back to the browser via HTTPS?
The public implementations are generally trying to verify current ownership of the specified email address. That's part of why I wish this could be linked to a third-party which already does that so users don't have to repeat the process as often.
> > You then need a non-trivial amount of work to export the private key and certificate and install it on all of your devices,
> Would it not be better to just use a different key for each device? That is, repeat steps 1 and 2 for every device you plan to use?
It would, but currently you can't do this if you use Google Chrome or Microsoft Edge. Again, remember that I'm talking about the practical impediments to doing this now rather than any sort of conceptual problem: if the industry cared, this could improve a lot very quickly.
>> Every year, repeat the process starting at step 1.
> Outside of a device getting compromised, is there a good reason for updating certificates more often than once every 5 years?
The general idea is that it protects against unknown, non-permanent mistakes, but I think the main point here is that key rotation should be automated so it can happen simply, since it considerably reduces the window of exposure for any mistakes. I'd expect a modern implementation to have a tiered approach where e.g. keys generated on a secure enclave, token, etc. are trusted longer than ones where user error makes it possible to get access to the private key.
> > It'd be really nice to have a LetsEncrypt-style effort to remove the warts from this process so it's approachable for normal people without a heavy support pool.
> I'm still doing more research on this, but what did the <keygen> HTML element lack that the process used by Let's Encrypt provide?
> > This is another area where I wish Mozilla hadn't prematurely killed Persona as it'd be really nice if there was a service which would allow you to associate different client certificates with a single user identity so private keys never needed to leave the device
> Shouldn't the private key be something that's associated with the browser? That is, when you install the browser, a private key is generated and used for all certificate signing requests.
I was referring to the two related concepts here: I like the model where each browser manages a private key (preferably stored in secure hardware) but you also need to handle the question which keys are allowed to sign responses for a specific person. Consider e.g. all of the sites which trust Google or Facebook to authenticate users and imagine what it'd be like if that could be extended so you could ask that trusted third-party which keys correspond to a verified email address. Having it be something which is commonly used would also be a great place for rotation if there was a seamless way to repeat the signing process every n days rather than a user having to do it the first time they access a site a year after the last time they renewed, when they may have forgotten a lot of the steps.
That last point underscores how much of this has nothing to do with PKI and everything to do with horrible UI: the failure mode for not having a valid certificate is generally horrible — looping selection dialogs, low-level TLS failure messages with no indication of what you can do to fix things, etc.
Are there any implementations that don't? For example, when I create an account on news.ycombinator.com, does it really need to verify my email address, rather than using a signed message sent via HTTPS during the sign up process?
> Consider e.g. all of the sites which trust Google or Facebook to authenticate users and imagine what it'd be like if that could be extended so you could ask that trusted third-party which keys correspond to a verified email address.
Perhaps we need to rethink using email for verification. For server side authentication, we have certificate authorities to handle verification of a given server's identity. In your example, Google or Facebook (or both) could serve as certificate authorities for the client certificate used for a given website.
Again, I would say that most websites do not (or should not) need my email address in order for me to sign up for an account. My web browser should be able to manage verifying my identity with a website as well as adding other trusted web browsers.
> That last point underscores how much of this has nothing to do with PKI and everything to do with horrible UI: the failure mode for not having a valid certificate is generally horrible
Unfortunately, that is very true. It would be nice if some serious effort could be directed to improve the process. I think that if we were using certificate authentication, as opposed to password based, it would be much harder for people's accounts to be compromised, even through "social engineering".
The hassle is that you need to install the certificate on every device you want to access the page from.
For federated use cases, they are less popular, likely due to the added complexity being a blocker for adoption.
It would be really cool if there was a service like Let's Encrypt that would provision "client" certs for this sort of use, but revocation lists and things are a little annoying for large-scale use.
EDIT: Another reason why this can be a hassle is that while Safari/IE/Chrome use the system to evaluate trusted certs on all OSes I've tested, Firefox uses its own implementation, so you have to add all the certs yourself. This is ... frustrating from a management perspective because you have to keep two sets of docs explaining what to do for new hires and etc.
EDIT 2: I've always been curious whether it's a net security benefit that Firefox does this. On one hand, they are less vulnerable to OS-specific attacks and can automatically un-trust root certs that are compromised for whatever reason, but you're then trusting Mozilla's implementation of something that is admittedly very complex.
The Firefox Enterprise mailing list is the place to go to for deeper level help on these things. 
SSL client authentication is widely used by the US military on smartcards requiring additional hardware readers: https://en.wikipedia.org/wiki/Common_Access_Card. AFAIK using a smartcard doesn't work reliably on Mac without installing 3rd-party software. The difficulties of usage in practice have spawned a cottage industry of commercial software and support, like http://militarycac.com.
I assume companies selling end user hardware tokens like YubiKey would love to see client certificate authentication become more usable, but initiatives like FIDO U2F seem to be gaining momentum instead.
SSL client certificates also don't work in HTTP/2 (because multiple requests are multiplexed over one connection).
Benefits include storing the private key in a hardware token; most OSes support them out of the box. You can just plug your token into a USB port, visit a site that requests a client certificate, enter a PIN, and be done (e.g. the Yubico PIV applet).
HTML also has/had the <keygen> element, which would generate a private key in the browser and send the public key off to be signed, essentially creating private/public key credentials, but that is being removed from browsers.
For inter-service communication I'd definitely consider using SSL client certificates pinning private keys e.g. to TPM but regular users can't be bothered with it.
If you're interested check out Token Binding that makes tokens (cookies, etc.) bound to TLS connections essentially providing security of client certificates for tokens.
Using it to authenticate a person regardless of device doesn't work so well from a usability point of view.
I would appreciate pointers to any open source libraries demonstrating best practices and/or promoting this approach, specifically protecting against replay attacks and race conditions that come up as the cert is renewed (much more often - thanks Let's Encrypt!).
It might be better to have accounts per device rather than per person in that case.
In my observation, people who try to go this route in an uncontrolled environment mess it up by sending the certificate (including the private key) in unencrypted email (which is the default in most cases) or using other insecure mechanisms. The only ones who'd even attempt this are those who may not go through a security check or audit.
[If there are easy ways to handle this in an uncontrolled environment, I'd like to know more.]
I mean with support: getting the certs in there.
Once they are in your keychain, client certs are really nice.
Basically JWT but without the pitfalls as far as I can see.
Show HN: PAST, a secure alternative to JWT | https://news.ycombinator.com/item?id=16070394 (Jan 2018: 361 points, 137 comments)
If your API just serves public non-user-specific data, a simple API key might be okay. The obvious downside of this method is that a user leaking their client API key is a big problem, especially if your users are likely to distribute code that makes requests (e.g. a mobile app that makes requests to your API).
The state of the art is probably still OAuth, where clients regularly request session keys. This means a leaked key probably won’t cause problems for very long. The obvious downside of this is complexity, but that can be mitigated by releasing client libraries that smooth over the process.
There's an RFC that goes into some of the security considerations of OAuth 2.0, that should be required reading if you implement it (even from a pre-built library): https://tools.ietf.org/html/rfc6819
It's crucial that clients are able to respond to their refresh tokens being revoked.
The good thing is that it is a standard workflow, unlike an API key being revoked, which is generally not handled (most people hard-code the API key in their client).
For most SaaS products, basic auth or an API key is going to be fine. In fact, a ton of SaaS vendors do exactly that. It's also totally fine for, say, an enterprise API used by a partner or clients.
OAuth is a cluster-fuck of terribleness, a nightmare for you to work with and a nightmare for your consumers to use. If you do it, you will need to have excellent support docs and examples, or you'll have to hand-hold external devs to get it working. The only time I might start considering OAuth is if you want other apps to be granted permissions to use the API on behalf of the user, where you want some granularity over which parts they can access.
I'm not saying OAuth doesn't have a use, but its awful, overcomplicated implementation means it's a huge time-sink compared to basic auth over HTTPS, and I certainly wouldn't recommend it without a very good reason.
And woe betide you if you're not using a framework that vaguely plugs-and-plays OAuth.
Couple that with all the shenanigans involved in trying to get two servers to talk to each other without a human involved in OAuth.
* Secret not included in request
* Verifies integrity of the message (since its contents are signed)
* Protection against replay attacks
It's probably overkill in a lot of situations, but I've always liked how even if TLS were compromised, all the attacker would gain is the ability to see the requests--not modify them or forge new ones.
I haven't used JWT before, but reading one of the links below, it looks like it covers a lot of the same stuff (although you'd have to implement your own replay protection if you want that).
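A minimal sketch of that kind of request signing, assuming a shared secret provisioned out of band and a timestamp for coarse replay protection (all names here are hypothetical, not any particular library's API):

```python
import hashlib
import hmac
import time

SECRET = b"shared-secret-never-sent-on-the-wire"  # provisioned out of band

def sign_request(method, path, body, secret=SECRET):
    # The secret itself never appears in the request; only a MAC over
    # the request contents plus a timestamp does.
    ts = str(int(time.time()))
    msg = "\n".join([method, path, ts, body]).encode()
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return {"X-Timestamp": ts, "X-Signature": sig}

def verify_request(method, path, body, headers, secret=SECRET, max_skew=300):
    # Reject stale timestamps: coarse replay protection. A nonce cache
    # would be needed to stop replays within the window.
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False
    msg = "\n".join([method, path, headers["X-Timestamp"], body]).encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    # Constant-time compare avoids timing side channels.
    return hmac.compare_digest(expected, headers["X-Signature"])
```

Any modification to the method, path, or body changes the MAC, which is what gives you the integrity property even if TLS is compromised.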
curl -X GET https://127.0.0.1:8000/api/example/ -H 'Authorization: Token 9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b'
This is what comes out of the box with Django Rest Framework.
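For reference, a minimal sketch of what the server side of such a token scheme boils down to (an in-memory dict standing in for the database table; the names here are hypothetical, not DRF's actual internals):

```python
import hmac
import secrets

# Stand-in for the database table that maps tokens to users.
TOKENS = {}  # token -> user id

def issue_token(user_id):
    # 20 random bytes, hex-encoded: the same shape as the token in the
    # curl example above.
    token = secrets.token_hex(20)
    TOKENS[token] = user_id
    return token

def authenticate(authorization_header):
    # Expects: "Token 9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b"
    scheme, _, token = authorization_header.partition(" ")
    if scheme != "Token":
        return None
    for stored, user_id in TOKENS.items():
        # Constant-time comparison avoids leaking token bytes via timing.
        if hmac.compare_digest(stored, token):
            return user_id
    return None
```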
FWIW I have ended up using OAuth2 for this situation a few times, and it always feels more complicated than I'd like.
But we are not alone in this regard. Bridge building is centuries more mature than software engineering and their shapes, materials and construction methods still change regularly. I would expect to see this trend remain for a long time. Engineering is an inherently creative practice, often staffed by appropriately creative people. The continued evolution, including the trial and error approach, are likely to continue for decades.
Also, here is a good recent video for ASP.NET Core 2 that includes extra things like HSTS, etc. Even if you're not in ASP.NET, the concepts will be relevant: https://www.youtube.com/watch?v=z2iCddrJRY8
The first three could be implemented easily in-house and are suited to cases where the number of consumers is small. It's better to use a third-party SaaS service or OAuth server for the fourth one. I work professionally on these things, and these implementations can be time-consuming. More often than not, people don't take their requirements into account when choosing a solution.
If you don't mind running a PHP application, or it being built in Laravel, (I don't, but some do) it's actually a really good implementation of a solid OAuth package (Disclaimer: I am a maintainer on oauth2-server).
You can set this up in a couple of days, and it'll be ready to roll for the majority of use-cases. With a few simple tweaks and additions, you can have it doing extra stuff pretty easily.
In one build I'm working on, this acts as just the authentication layer. The actual API that relies on the tokens this generates sits elsewhere and could be written in any other language (it's not in this case).
WEB API that a device needs to authenticate to.
Can't store password on device (it's a device we don't control).
No user, so authentication has to be all automated.
i.e. we need to run software on a clients machine, and it has to authenticate to our web api to send us data.
We obviously don't want to hard code the credentials in the software as that can be trivially extracted.
Am I missing something, or have you painted a contradiction?
* You want the device to hold some secret
* You want the device to be able to prove that it holds the secret
* You don't trust the device to hold a secret
If I'm understanding this correctly, then you've left the realm of cryptography and entered the realm of obfuscation.
This isn't necessarily a losing battle, but it changes the way we need to think about the problem.
Games consoles and DRM'ed video media (Blu-Ray and HDCP) do something similar in not trusting the end-user: they want to hold the key to the kingdom whilst ensuring the user never sees it. They've done this with varying levels of success.
Can you expand on this? Because storing a device-specific password (or api key, which is essentially the same) would be my first suggestion.
If it's because you can't configure the device, then my suggestion would be to create a process that embeds the device key into the software before deploying to each particular device.
We could have a password entered by our systems guys who deploy to a new machine for the first time, the service encrypts and stores that on disc, then each time it wants to talk to us it can decrypt its password.
I'm not sure if that would be a good solution, or is it just as insecure as having password in the code.
This technique is referred to as "Mutual Authentication": http://www.cafesoft.com/products/cams/ps/docs32/admin/SSLTLS...
Basically, it's 2-way SSL. You use signed SSL certs to authenticate the server to the client and the client to the server. You could use your own cert signing server or employ a third party cert signing service.
Using this method, your techs would need to set up the SSL cert for the client machine when installing the software, or, the SSL setup procedure could be part of the software installation procedure.
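A minimal sketch of the server side of this in Python's stdlib ssl module (the file names are hypothetical, and this assumes you run your own CA that signs each device's cert):

```python
import http.server
import ssl

# Trust only our own CA for client certs, and refuse connections
# from clients that don't present a cert signed by it.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # hypothetical paths
ctx.load_verify_locations("device-ca.crt")        # our own signing CA
ctx.verify_mode = ssl.CERT_REQUIRED               # the "mutual" part

httpd = http.server.HTTPServer(("0.0.0.0", 8443),
                               http.server.SimpleHTTPRequestHandler)
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```

With `CERT_REQUIRED`, the TLS handshake itself fails for clients without a valid cert, so unauthenticated requests never reach your application code.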
Interesting idea that may solve your problem. Hope this helps.
In this case, I don't see any automated check that can verify that the client trying to renew the cert is the original device, so there's no point in limiting the lifetime of the certificate, unless you send a person to do that verification manually.
Is there a reason you don't want to use tokens? Upon authenticating once (admin, manually), the web service would generate a token, which it would store and potentially have to revoke.
With something like OAuth, the token could be more temporary and automatically replaced during each use, to avoid having one secret (whether it be a password or token) that could be leaked and used by multiple clients.
It's just as insecure, as the software would need to store the decryption key itself in plain-text.
But why are you so concerned about keeping the password secret? As long as each device has a different password, you can identify abusive uses (too many requests, or from multiple sources, etc) and block that account.
What do you fear that the client could do with the device password?
Kind of like the (in)famous Denuvo
Which obviously can be cracked, but it takes a long time.
That way, the breach of a single device doesn't immediately give the attacker unlimited access to the API.
You should also monitor for unusual activity, and blacklist API keys and devices with such activity.
2. Generate pregenerated-key upon first login (maybe based on email or tel no?). Just like, e.g., Signal does it
3. On your servers, check if pregenerated-key and/or email is used more than once at the same time, if so invalidate it and direct user to 2.
We monitor for the same login being used twice at the same time and disconnect both and delete the account.
If you happen to be using Azure, I found it very useful for everything you'd want to do with your API, one of those things being the ability to tie down security as much or as little as you need. It even builds a front end for anyone who has access to use for reference. But that's just one of the cool features.
eta: I don't work for them, but really no need to roll your own.
We have implemented it for authentication with an ASP.NET Core web service, with a REST-based API. Authorization is also possible, either by working with the JWT token scopes or by using the Auth0 app_metadata.
If you want more, then use username + pass. Encrypt both or generate something from both of them, e.g. encrypt(username):encrypt(pass).
If you want more, use private & public keys, which receive a session token the first time (when authenticating).
I think the end result would be a self hosted oauth server with permission management.
Some APIs also use self-invalidating auth tokens based on an expiry date. For more secure data, I'd prefer that.
http://page.rest does a good job.
Create a token, put your userId in it, set an expiry date.
If a request comes with a token, check that the token is valid, then check the userId & expiry date; otherwise throw an error.
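A minimal sketch of that scheme, assuming an HMAC-signed token of the form userId.expiry.signature (the key and format here are hypothetical, just to show the shape):

```python
import base64
import hashlib
import hmac
import time

KEY = b"server-side-signing-key"  # hypothetical; never leaves the server

def make_token(user_id, ttl=3600, key=KEY):
    # Put the userId and an expiry date in the token, then sign it so
    # the client can't tamper with either.
    expiry = int(time.time()) + ttl
    payload = f"{user_id}.{expiry}".encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest().encode()
    return base64.urlsafe_b64encode(payload + b"." + sig).decode()

def check_token(token, key=KEY):
    raw = base64.urlsafe_b64decode(token.encode())
    payload, _, sig = raw.rpartition(b".")  # hex sig contains no dots
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(expected, sig):
        return None  # forged or corrupted
    user_id, _, expiry = payload.decode().partition(".")
    if int(expiry) < time.time():
        return None  # self-invalidating: past the expiry date
    return user_id
```

The expiry check makes the token self-invalidating with no server-side state, though (per the points upthread) you still need server-side state if you want to revoke tokens before they expire.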
I believe the meta is that JWT is solid in itself but allows doing things "wrong". The guardrails, so to speak, are insufficient if not outright lacking.
I'd say just go with plain text token for a web app. I don't like the idea of trusting the client because I don't understand how jwt works.
I am interested to know whether HN thinks this is considered secure?
For users, I'd add an OAuth layer to the application layer and still have the application using an HMAC like above. You want to try to keep things 'stateless' when it comes to your APIs.
For users you'd need some way for the users to "fetch the secret", which is effectively what logging in is. At that point you should just use JWT or OAuth.
Bonus: If you stay close enough to the standard you can plugin a real OAuth 2.0 provider if/when you decide you need it.
I think we're thinking the same thought, maybe my terminology is sloppy.
Suppose we just say "Use this token-generation endpoint (with your credentials) to generate a session token, and attach that token by means of OAuth 2.0 Bearer Token in subsequent requests to other endpoints".
Doing that, we can easily scythe off any bloat, no? We don't care about people signing-in with their Google accounts, or anything like that. Or is that what 'client credentials flow' means?
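For what it's worth, the client credentials grant (RFC 6749, section 4.4) is the flow for exactly that no-user, server-to-server case; "sign in with Google" is a different flow (authorization code). A minimal sketch of what the token request looks like (endpoint and credentials are hypothetical):

```python
import base64
import urllib.parse

TOKEN_URL = "https://api.example.com/oauth/token"  # hypothetical endpoint

def client_credentials_request(client_id, client_secret):
    # Per RFC 6749 section 4.4: grant_type=client_credentials, with the
    # client authenticating via HTTP Basic auth. The response carries a
    # short-lived bearer token for subsequent requests.
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urllib.parse.urlencode({"grant_type": "client_credentials"})
    return TOKEN_URL, headers, body
```

Staying this close to the spec is what lets you swap in a real OAuth 2.0 provider later without changing your clients.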