PSA: Don't outsource either your authentication or authorization. Run it in-house. Don't hand the keys to the kingdom to a third party that can fail the way Okta did, or give up the ability to police access in a way that becomes incredibly difficult to audit and classify. SaaS is no panacea here: it doesn't make the work or the headaches go away, and the business risks are too great.
Building authentication in-house is non-trivial to say the least. It is a dynamic service that fundamentally needs a database and is fundamentally accessible both to the wider public Internet (networking-wise) and to unauthenticated users (which attackers always start out as), which makes it incredibly difficult to secure. DoS, keeping everything patched on a moment's notice, rate limiting clients, emailing resets, ensuring your emails continue to be deliverable, suspending accounts hit by credential stuffing, having support processes for unsuspending accounts that are not themselves susceptible to social engineering... the sheer complexity of security engineering required to do this correctly and at scale is not even remotely trivial. Not to mention very strict performance/latency requirements because auth is overhead on everything, which gets to be difficult once services start to scale globally, and runs into direct internal contradictions when revocations compete against internal caching.
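To make one of those concerns concrete, here's a minimal sketch (thresholds are illustrative, not recommendations) of the kind of per-account throttling you'd need against credential stuffing; a real system also has to handle distributed attacks, IP reputation, persistence across restarts, and the unsuspension workflows mentioned above:

```python
import time
from collections import defaultdict

# Illustrative thresholds only -- not recommendations.
WINDOW_SECONDS = 900  # look at failures within a 15-minute window
MAX_FAILURES = 5      # lock the account after this many recent failures

_failures = defaultdict(list)  # account -> timestamps of failed logins

def record_failure(account, now=None):
    """Note a failed login attempt for this account."""
    _failures[account].append(now if now is not None else time.time())

def is_locked(account, now=None):
    """True if the account has too many recent failures and should be suspended."""
    now = now if now is not None else time.time()
    # Keep only failures that are still inside the sliding window.
    recent = [t for t in _failures[account] if now - t < WINDOW_SECONDS]
    _failures[account] = recent
    return len(recent) >= MAX_FAILURES
```

Even this toy version shows the tension in the comment above: the lockout state is yet another piece of hot, shared data that has to be fast, replicated, and consistent with revocations.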
I appreciate the business risk of depending on a third-party for part of the foundation of your value offering, but the truth is, if you're not running your own datacenter and hosting your own databases (and therefore your databases are located in some vendor's offering, be it co-lo or cloud), you're dependent on a vendor for part of the foundation of your value offering. In and of itself, that's a pretty normal, expected, and acceptable business risk to take.
Not all businesses are the same, and I'm sure there are some where you need to run auth in-house (like if you're selling on-prem software), but I'm mindful that most companies simply cannot run auth securely. And when their security fails, their users' data leaks, and in a modern era, that too is a real business risk.
I'm sorry, I'm not against using 3rd party services, but I've heard such strong statements about relatively uncomplicated functionality thousands of times in my 24-year career, and they never hold up for long.
Every now and then I hear some IT "priests" screaming "don't do this, do that," and people follow it. But I always see it lead to something billed as "cheap, easy and fast" actually being expensive (at scale), difficult (to adjust to specific needs, holding back the business) and slow (because of all the work-arounds) :-D
Companies literally burn a ton of money in the long run because their CTO can't give them a proper evaluation of what is really needed at their expected scale. Even a large business with billions of dollars in revenue may not need Google-scale infrastructure.
I just use mTLS; MDM software like Intune deploys the certificates.
Elsewhere in the org we have OpenAM, which ties into AD and provides an OIDC/SAML integration.
Your blanket “don’t do authentication” statement is loaded with so many specific circumstances and caveats that don’t apply in many different cases that it just sounds uninformed, and thus arrogant.
GP did agree that there are some cases where this doesn't apply:
> Not all businesses are the same, and I'm sure there are some where you need to run auth in-house (like if you're selling on-prem software), but I'm mindful that most companies simply cannot run auth securely.
I couldn't disagree more strongly; you will screw this up if you roll your own. The only secure option is to push as much as possible outside of your infrastructure. The concerns that it won't be "auditable" or that you "give away the keys to your kingdom" are nonsense and implementation-specific. If it's important, then don't build your integration with those flaws. You will still have substantial control, so you can decide how "auditable" the auth actually is and what to do with users authed from one platform or another.
If you can avoid managing authn, you will be substantially better off. I don't think you can so easily do this for authz; however, it depends on what you're building.
> you will screw this up if you roll your own. The only secure option is to push as much as possible outside of your infrastructure.
In my experience the integration with outside infra (OAuth, OIDC etc.) carries significant complications in itself, and I'm not convinced that you really gain that much in terms of reducing complexity or attack surface by using these technologies. See this article that was on the front page yesterday for an example of non-obvious things that can go wrong https://trufflesecurity.com/blog/google-oauth-is-broken-sort...
> I'm not convinced you really gain that much in terms of reducing complexity or attack surface by using these technologies.
In the case you mention (which I posted), integration of some kind offers functionality that would be otherwise impossible to get--single sign-on which lowers friction of user acquisition. SAML is one alternative method, but from my experience it has some sharp edges too.
Of course, whenever you're integrating a third party solution, you need to do so carefully.
Our current production deploys for our B2B/SaaS banking product run "in-house" authentication.
We were essentially laughed out of the room when we showed our biggest prospect how we managed users in our legacy product. I felt really smug about my fancy hand-coded user management tooling until about 50% into the presentation when I could see an equivalently-smug look + mild sideways head shaking coming from the customer CTO. Running in-house authentication is literally a joke to many of these orgs in 2023. Most developers probably won't ever experience this. I think it is a shame, because this one experience forced us to embrace a new kind of light that much of HN probably won't ever see. Our roadmap was accelerated by this rejection.
Realizing that we might have lost a multi-million dollar customer over an ego trip was a bit of a wakeup call. I have some skin in this game. I don't know about the rest of HN. At a certain point, it's about making money and being able to sleep at night. Running in-house anything typically works against that (i.e. spending finite innovation tokens). Like it or not, if you are dealing with a big crusty organization, outsourcing authentication might become mandatory at a certain point.
Further, if something goes wrong and one of our customers gets breached, I don't want the investigation to wind up at a line of code in my codebase. That means I have to get involved in legal proceedings. I am completely out of the game of touching end-user passwords or MFA tokens now. We get claims from an IdP and that's that. We don't even write SAML/OIDC integration code because our tech partner has done that for us by way of a happy DDL election on our FaaS runner config. The authenticated user's claims are injected into our HTTP trigger function arg list, and since we do stupid-simple PHP-style SSR web apps, the rest is fairly obvious. We still have a "Users" table, but all that password hashing/salting/iterations crap is replaced by a simple UPN string field (i.e. the user's email address).
Above all this, use case matters most. If you are unregulated and dealing B2C with 100m customers, you'd have to be the maddest hatter to ever live to consider paying someone else for authentication duty like we decided to do. If you are working B2B with US banks and other regulated orgs, the exact opposite appears to be true.
1000x this - each and every CIAM has their own "interpretation" of what the various constructs actually mean in the various RFCs and it leads to a lot of hodge-podge integrations that organizations outgrow. Things like Okta and Auth0 might seem shiny and easy on the face of it, and they are for very small startups, but quickly devolve into dogshit as you scale.
Ah, someone else here has probably encountered CIAM pain too. ;@]
For internal use, FreeIPA is neat (389ds + dogtag). Some people use AD or Azure AD (which again, is outsourcing). Shibboleth provides a FOSS SSO solution. https://www.shibboleth.net/products/
For k8s IdP, there are numerous FOSS, on-prem solutions like pinniped.
I wonder why this got downvoted. I looked over the alternatives a while back, and I thought Ory looked the best too, with Keycloak as a good number 2.
I agree with the OP that it's good to not be dependent on SaaS for something as central as user management.
I do agree with the others that it's best to write as little as possible of security critical code yourself. However, integrating with a third party OAuth thing can be hard (and leave plenty of room for security fuckups) especially if you have anything implemented at all already user-wise.
There are plenty of solutions you can run in-house like Keycloak or PingFederate. Not using Okta or Azure AD doesn't mean hand-crafting your own tooling.
> Don't outsource either your authentication or authorization
This is why a solution like FusionAuth (my employer) or Keycloak is a good option. You can self-host the authentication server.
If you do that, you outsource the development of the auth solution, but keep the data and the operation of it firmly in your control, lowering the business risks.
As other comments mention, there's a lot that goes into building and maintaining auth, and it's rarely a differentiator for your application.
Running your own system also, frankly, lowers the target value of your system. Okta and other centralized auth solutions are of greater value than the users of only one system. Of course, they can also hire specialized security experts on a scale you might not be able to, so you have to balance those risks too.
I agree, with the implicit caveat: build it in-house on OIDC-certified 3rd party dependencies. The OpenID Foundation already goes a long way toward this with its conformance suites and certification program. I don't understand how we arrived at this status quo of "I'm just going to outsource user data, UX and all, to a company I have little transparency into, and it's going to be fine because I won't waste time maintaining auth infrastructure that isn't my core business; much better to maintain the several sync points and the settings pages they'll never build, and waste man-hours investigating why a user's password changed when they insist it didn't, since I can't tell them to go ask Auth0?"
They're desperate to get off keycloak at my org. Not sure the specifics, but our users complain about Auth issues often, and the keycloak web interface is painfully slow too.
I’ve read you need to partition usage to limit each keycloak instance to about 1500 total users to avoid performance issues. Does this hold true in your org?
I work with an org that has 10's of thousands of users on keycloak, so there must be a way around the problem. I'm not administering the instance for this project, so I have no direct insight into how they deal with it.
It looks like it's 'entities' and not just 'users'. From the docs [0]:
> Keycloak allows you to create any number of realms and any number of clients and users in them. But you need to be thoughtful as you scale up because as the number of entities grows, Keycloak slows down. When you log in as a superuser in the admin panel, even if you have only 1,500 realms, it will take a few minutes or even crash on timeout. Creating a new realm will take about 20 to 30 seconds. You need to change your logic and interaction with Keycloak.
I have to disagree with your assessment. It's not that rock solid, "supports everything" is a very vague term and "trusted everywhere" I'm not even sure that that is supposed to mean.
I work for FusionAuth which has a free community edition[0], so I'm a bit biased. You can read some of our community stories talking about what folks have built on top of it[1].
Other alternatives I've heard mentioned for self-hosting include:
* Ory
* Platform specific OSS (Devise for Rails, Passport/NextAuth for javascript, Spring Security for Java, etc)
* IdentityServer (may have to pay something now, not sure)
FreeIPA is pretty good for internal users when money is tight. It's a FOSS almost AD but for *NIX and it does HA. For the SSO part, Shibboleth2 or CAS.
It supports various backing RDBMSes (like PostgreSQL, MariaDB/MySQL and others), and allows both users that you persist in your own DB and various external sources, like social login across various platforms. It's an absolute pain to configure and sometimes acts in stupid ways behind a reverse proxy, but it has most of the features that you might ever want, which sadly comes coupled with some complexity and an enterprise feeling.
I quite like that it offers the login/registration views that you need with redirects, as well as user management, storing roles/permissions and other custom attributes. It's on par with what you'd expect and should serve you nicely.
This one's a certified OpenID Connect Relying Party implementation for... Apache2/httpd.
Some might worry about the performance and there are other options out there (like a module for OpenResty, which is built on top of Nginx), but when coupled with mod_md Apache makes for a great reverse proxy/ingress for my needs.
The benefit here is that I don't need 10 different implementations for each service/back end language that's used, I can outsource the heavy lifting to mod_auth_openidc (protected paths, needed roles/permissions, redirect URLs, token renewal and other things) and just read a few trusted headers behind the reverse proxy if further checks are needed, which is easy in all technologies.
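For illustration, here's a minimal sketch of that "read a few trusted headers" step, assuming mod_auth_openidc is configured to forward claims as `OIDC_CLAIM_*` request headers (its default claim prefix) and, crucially, that the reverse proxy strips any such headers arriving from outside:

```python
# WSGI spelling of the OIDC_CLAIM_* headers set by mod_auth_openidc;
# the claim prefix is configurable in the module, so adjust to match.
PREFIX = "HTTP_OIDC_CLAIM_"

def claims_from_environ(environ):
    """Collect the OIDC claims the reverse proxy forwarded with a request.
    Only safe if the proxy strips OIDC_CLAIM_* headers sent by clients,
    so anything left with this prefix was set by mod_auth_openidc."""
    return {
        key[len(PREFIX):].lower(): value
        for key, value in environ.items()
        if key.startswith(PREFIX)
    }
```

Each backend then just calls something like this instead of linking its own OIDC library, which is the whole point of pushing the heavy lifting into the proxy.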
That said, the configuration there is also hard and annoying to do, as is working with OpenID Connect in general, even though you can kind of understand why that complexity is inherent. Here's a link with some certified implementations, by the way: https://openid.net/developers/certified-openid-connect-imple...
Please don't write your security code from scratch and lean in the direction of just gluing various tested options together.
Heya, you might want to check out FusionAuth community edition (my employer). It's very comparable to Keycloak and definitely simpler to set up and run. It's free for unlimited users.
This advice is wrong on so many levels. Building your own OIDC/OAuth/SAML infrastructure is incredibly expensive, error-prone and dangerous; not to mention you need people to code it, run it securely, audit it, and test it.
Even running something like a self-hosted Keycloak instance/cluster is not easy either and comes with its own set of problems.
Have you actually rolled out such a solution yourself? Yes, the contractual obligations of the protocol are deceptively shallow, but that is the easiest part. We’re not just talking about assembling a JWT and asserting it. We’ve got to hook up all the plumbing for signing, key management, credential management, various auth flows (after all, not all apps are created equal), RBAC, SSO, federation, custom rules, and all the accoutrements in between. Yeah, great, you can hack in a solution for your simple app, and if your use case lets you get away with it then go with God. A fully compliant system, batteries included, is a staggering amount of work.
OIDC seemed complicated to me for a while. What helped the most was: 1) focusing on understanding why all the song and dance is necessary from a security perspective. Start with [0]. 2) Realizing you can ignore pretty much everything except the OAuth2 spec and the core OIDC spec until you need it.
1. The OG OAuth2 spec never said anything about identifying principals
2. OIDC mandated an id_token to avoid the hit of POSTing the access token to an introspection endpoint
3. The shape of the id_token is a JWT
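To make point 3 concrete, here's a stdlib-only sketch of splitting a JWT-shaped id_token into its three base64url segments. This deliberately skips signature verification; real code must verify the signature against the issuer's published JWKS before trusting any claim:

```python
import base64
import json

def decode_jwt_unverified(token):
    """Split a JWT into (header, payload) WITHOUT verifying the signature.
    For illustration only: a real relying party must check the signature
    against the issuer's JWKS before trusting anything in the payload."""
    def b64url_json(segment):
        # base64url segments have their '=' padding stripped; restore it.
        padded = segment + "=" * (-len(segment) % 4)
        return json.loads(base64.urlsafe_b64decode(padded))
    header_b64, payload_b64, _signature_b64 = token.split(".")
    return b64url_json(header_b64), b64url_json(payload_b64)
```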
This doc is junk, btw - it's suggesting the client use the id_token for the purposes of authentication? I wouldn't trust that token beyond some basics, unless you have very strong controls over the authorization server and IdP to ensure things like email verification are propagated correctly.
> OIDC mandated an id_token to avoid the hit of POSTing the access token to an introspection endpoint
A bit more fundamental than that, the two tokens are meant for different purposes and different audiences.
The access token is meant for delegated authorization of a user against resources. A client isn't even meant to be able to interpret it (but often can, because they are often signed JWTs). In particular, clients aren't supposed to use the introspection endpoint.
The id_token is meant for the client itself, about the user. This isn't meant to be shared with anyone other than the client, and must be audienced (only usable by the client).
Moreover, access tokens may grant indefinitely long-lived access to resources, such as offline reading of a calendar for group scheduling free/busy information. An id_token is a signal of state at a particular point in time, e.g. this browser request represents this user because it bears the id_token.
To illustrate why OIDC introduced audience validation:
When you separately request user info from an endpoint using an access token (which, per the original OAuth 2.0 spec, is just an opaque string that cannot be validated), that access token could be someone else's: possibly from a user who logged into a different, malicious application that somehow tricked you into using its token.
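A sketch of the claim checks that close that hole (helper and parameter names are hypothetical; it assumes the token's signature has already been verified):

```python
import time

def validate_id_token_claims(claims, client_id, issuer, now=None):
    """Hypothetical helper: reject id_tokens that were not minted for
    this client by the expected IdP. Assumes the token's signature has
    already been verified against the issuer's JWKS."""
    now = now if now is not None else time.time()
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]  # aud may be a list
    return (
        client_id in audiences           # minted for us, not another app
        and claims.get("iss") == issuer  # minted by the IdP we expect
        and claims.get("exp", 0) > now   # not expired
    )
```

The `aud` check is exactly what defeats the token-substitution attack described above: a token minted for a malicious app carries that app's client_id in `aud`, so it fails here.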
I played around with consuming a few different OIDC providers as close to the raw APIs as possible (not component libraries or client SDKs) just to try and understand the flow.
Each provider implemented the spec subtly differently and caused a massive headache to get working even for just a basic auth code with PKCE flow.
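To be fair, the PKCE part itself is mechanical; here's a stdlib sketch of generating the code_verifier/code_challenge pair per RFC 7636 (S256 method). The provider-specific pain tends to be everywhere else:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636).
    The verifier stays client-side; only the challenge goes in the initial
    authorization request, and the verifier is sent with the code exchange."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).decode().rstrip("=")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).decode().rstrip("=")
    return verifier, challenge
```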
I'm still none the wiser on why you'd even want OIDC for most apps. Granted if you have much more complex SSO needs with external IDP integrations and OAuth API stuff etc it can make sense. I don't see why it is preferable over basic forms auth with a cookie.
The default answer seems to be to support mobile apps but I'm also none the wiser on why a JWT auth token is preferable to store on device and send with every request vs a cookie. They're both just 'keys' that need to accompany every API request and need to be stored securely. OIDC-by-default seems like overengineering but I appreciate I'm probably missing a critical reason.
If you can get away without OIDC, don't touch it. It's only useful if you don't control the source of users, need seamless login across many apps, or your app is integrated with many other apps and you need something that works everywhere and is a standard. OIDC and SAML are the lingua franca when you need those kinds of capabilities. Those scenarios are just the ones I've faced; there's a lot more to it.
> I don't see why it is preferable over basic forms auth with a cookie.
A cookie is how you typically persist a session in a browser; that's the step that usually comes after you've authenticated / been authorized. The scope of OIDC is purely authentication/authorization, and it doesn't even specify how you should persist the session. Also, the scope of OIDC goes beyond the browser.
> OIDC-by-default seems like overengineering but I appreciate I'm probably missing a critical reason.
By all means, if your use case doesn't need it, don't touch OIDC/SAML; it would be like using Kubernetes to host your mum's WordPress. OIDC and SAML are complicated because SSO is made to fit the needs of companies with all sorts of use cases that go beyond a simple one like you seem to describe.
So much this. I work on an educational product, and we support something like 5 or 6 different school OIDC providers and have to maintain a separate implementation for each one. They all use the same base class, but there’s just a stupid amount of variation in how they actually work.
Can you share the names of these OIDC providers? I've worked some with schools and it seems like they all have settled on either Azure AD/Entra or Google, but I'd like to learn more.
I wholeheartedly agree with your last sentence that it seems overengineered, but one has to assume that there are use cases that warrant such a complex auth scheme. I cannot speak to that as I have never implemented systems of massive scale.
However, one thing that puts me off is that it is slowly becoming the default way of implementing authentication and authorisation at work, where our internal (web) services are (at best) used by tens of people on a given day. Besides, the "old" way of auth integrated much better into our existing system landscape, while the "new" way requires (in our case) a Keycloak server.
Again, all that would be fine if the use case warranted the complexity, but in my employer's case it does not.
A session cookie from app1.domain.com isn’t readable by app2.domain.com. So with plain cookie auth, you have to login to every app individually. Is there a simpler way around this than OIDC?
And what if you want to gate your services to only those users who have an org-issued yubikey? With OIDC you can delegate the device check to a single host (your IdP) and if apps speak OIDC they’ll be protected. That means MFA SSO!
I assume from what you are describing that OIDC is probably just the right tool for this job.
In my problem domain (think internal apps that serve many different purposes with little overlap, and a diverse set of users), logging in individually to every single app simply is not an issue, albeit a little annoying.
That being said we do use a single authentication backend across all apps, just not one that is capable of OIDC and is thus a lot more limited in what can be achieved with it.
I don't understand the diagram. We have a browser sending a request to the identity provider. Then sometime later, the relying party is responding. How does the response come from a different server than the request went to?
Maybe it would help if each response was correlated to its corresponding request somehow.
It’s missing a step or two. Once you’ve selected your identity provider, the user is redirected to it, identifies themselves, and selects the claims they’re willing to share (most of the time, only what’s needed for authorization). The identity provider generates the proper token and claims and returns them, along with the user, to the third-party site, which can then authorize (or not) the identified user.
This exchange of data and redirection is the core of OAuth.
The app server requesting access has to be configured with the identity provider. Part of that is a redirect URL, which the identity provider redirects the client to once the authentication is ok. The URL is to an endpoint of the app server.
When the id provider redirects, it includes the authorization code as a URL parameter or similar, so the app server can pick it up and call the id provider to exchange the authorization code for an access token.
At least that's how I remember it off the top of my head.
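That matches my reading of the spec. As a sketch, the code-for-token exchange is a form POST to the provider's token endpoint; the parameter names below come from OAuth 2.0, while the endpoint, client id, and secret are placeholders:

```python
from urllib.parse import urlencode

def build_token_request(code, *, client_id, client_secret, redirect_uri):
    """Build the form body a relying party POSTs to the provider's token
    endpoint to exchange an authorization code for tokens. Field names
    are from the OAuth 2.0 spec; the credentials here are placeholders."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,  # must match the registered redirect URL
        "client_id": client_id,
        "client_secret": client_secret,
    })
```

A real client POSTs this with `Content-Type: application/x-www-form-urlencoded`; with PKCE, a `code_verifier` field is added and the secret may be omitted for public clients.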
Thanks for the feedback. I created the diagram and would love to improve it.
I skipped a bit (as other folks have mentioned) and the 'authenticate the user' and 'obtain the user consent' steps both involve requests to the browser. I'll update the diagram to make it clearer.
I think because the relying party is "relying" on the OpenID provider to authenticate the user, but I wasn't able to find the authoritative origin of the term.
Question for everyone saying that you should always outsource authentication, how do you handle the on-premise case? Or does this only apply to pure SaaS software?
As long as the app only depends on the standard, you have some flexibility.
For example, you can outsource authentication to a third party OIDC server on SaaS deployment and run a self hosted OIDC server like Ory Hydra for onprem deployments.
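This works because any compliant provider, SaaS or self-hosted, publishes its metadata (endpoints, JWKS URL, supported flows) at a well-known path derived from the issuer, so switching providers is mostly a config change:

```python
def discovery_url(issuer):
    """Where an OIDC provider publishes its configuration document
    (token endpoint, JWKS URL, supported flows). An app that reads this
    at startup can point at any compliant provider via one setting."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"
```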
My read of this is 'don't write your own auth library'. If you want to rely on someone else to host it, or host it yourself, that's up to you and entirely fine. Different pros and cons of each approach; neither is perfect.
been working with keycloak at work for years now. handles several hundred thousand users every day and works fine. we upgraded some of our keycloak servers to red hat sso (which is keycloak but with red hat support) and it's a joy to use.