I recently went down this road for my home lab and went with Authelia
Keycloak works, but it's a behemoth and still needs further services to work with traefik forward auth.
Authelia works great. You don't get a UI to edit users, and there's no two-way sync with a backing LDAP server, but the fact that it can be configured with a static file + environment variables makes it a great fit for a lot of cases. If you're just looking to add auth to some services, and SSO to those that can work with it, I'd suggest starting with Authelia.
If you are looking for SSO + easy configuration + an admin UI (which admittedly has a mid 2000s UX look and feel), you should check out FusionAuth. It's free to download and run yourself[0], docker friendly[1], has a variety of configuration choices[2] (including terraform[3] to manage your OIDC/other settings).
Worth noting that it is free to use, but not open source[4].
I can only speak from my perspective as an employee, not the whole company. It is something I've thought about. I will also ask the CEO/founder or other leaders to weigh in.
Many devs care about open source when they are evaluating a solution, but many really want "free as in beer". They want to try a product without getting out the credit card or engaging with sales. We cater to the latter category, which wants to understand the product quality without talking to any sales people.
Some of these folks use our community product for their production needs, which is perfectly fine. We have people running FusionAuth in production with 1000s of tenants or 10000+ applications. (I always say they "pay" us by filing bug reports, giving feedback and voting on feature requests.)
But some decide they want to pay us for hosting, support or advanced features. Those choices help us build a business.
Devs, and especially buyers, are interested in sustainability of a product they are going to integrate into their system. An auth provider isn't a javascript widget that you can easily drop in or remove from your system. It embeds in your system, even if you stick to a standard like OIDC, and is difficult to switch from, especially at scale. You want to be sure the company and product is going to stick around. (If you want to make sure you can run the product even if everyone at FusionAuth wins the lottery, we do offer source code escrow for a price, but haven't had anyone take us up on it.)
FusionAuth is a profitable company (we did recently raise a round to accelerate growth, you can read more about that here [0]). Open source companies often have a hard time meeting the profit goals of the market or investors. This is a known issue and often results in relicensing or changing the rules, as we've seen with Hashicorp[1] and Elastic[2]. This switcheroo can cause issues, confusion, and ill-will; FusionAuth's licensing avoids that.
FusionAuth also develops in the open[3]. This open development process gives us another common benefit people get from OSS--community feedback.
Also, I don't want to throw shade at Ory and Zitadel, since I have no idea about their finances (apart from a brief look at Crunchbase, which shows they've raised 22.5M[4] and 2.5M[5] respectively). I hope they're building sustainable businesses, but selling closed source software is a proven route to a profitable business, one that has built many big companies (including in the auth space, such as Okta or Auth0). Again, this is not FUD (or at least I don't intend it to be!), just an honest assessment of the difficulties of making money in open source dev tools [6].
We also compete on features, support and documentation. Again, I can't speak to Ory or Zitadel; they look nice, but I haven't built anything with them, so it is hard for me to know how good they are. I do know that many clients have told us they appreciate those aspects of our product.
To sum up:
* FusionAuth has a free option, which helps reduce friction and gives some of the benefits of OSS. The open process and escrow also give some of the benefits of OSS.
* Some devs and buyers care about business sustainability, especially when integrating a tool deeply into their application. FusionAuth will never have to worry about relicensing a version because AWS is eating our SaaS revenue stream, for example.
* We offer great support, documentation and intricate auth features at a reasonable price.
>> It embeds in your system, even if you stick to a standard like OIDC, and is difficult to switch from, especially at scale.
This is precisely why a real, Open Source solution should be a top priority feature for anyone evaluating software like this. We have seen closed source solutions, time and time again, "realign" their business by dropping features, or even entire, free plans. In some cases, closed source software has been removed from the market all together when the company gets bought out. Changes in pricing plans leads to a worse fit for customers... etc. At least when Hashicorp and Elastic abandoned Open Source, the community had the option to say "yeah, good luck, thanks for the good times, but no thanks on your new direction" and fork the product to continue maintaining it. When a closed source products owners decide to "shift gears" on the product line up, customers are left between a rock and a hard place. Maybe if there were no Open Source options, but fortunately, there are several in the auth space.
I've been thinking lately how important it is that software be not only open source, but simple enough to be forkable. The fact that Keycloak is open source almost seems irrelevant given how massive the codebase is.
Thanks for the detailed answer. If people are already taking advantage of your free offering, how would it be different if you released the source? If you're worried about another company piggybacking off your work to start a competitor, why not use one of these newfangled non-compete licenses? That still gives users the security of being able to fork the project internally if you go under.
Sorry, on re-read, this sounds kinda abrupt. My bad.
I think in the end this is a business decision. The executive team has considered options and decided that closed source is the right path for the company.
How do I integrate this with a reverse proxy like Caddy and their forward_auth directive? I want to secure my apps on the proxy layer, not the app layer.
You'd have FusionAuth issue the tokens through an authentication event (typically the authorization code grant for interactive sessions). Lots of flows with sequence diagrams outlined here [0].
Then store the tokens on the client. For browsers dealing with first party apps (everything under the same domain) we recommend cookie storage of these tokens [1].
Then have Caddy examine the tokens provided by the browser. Here's an example Caddy config I put together for a workshop [2].
Finally, depending on your security posture, you might want to verify tokens both in app and at the proxy.
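To make the last step concrete, here is a minimal sketch of the proxy/app-side token check, assuming the access token is an HS256-signed JWT read from a cookie. Real deployments typically use RS256 against the provider's JWKS endpoint and also check `exp`, `iss` and `aud`; all names here are illustrative, not FusionAuth- or Caddy-specific.

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(segment):
    """Decode a base64url segment, restoring stripped padding."""
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def verify_jwt(token, secret):
    """Return the claims dict if the HS256 signature checks out, else None."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # not a three-part JWT
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        return None  # signature mismatch
    return json.loads(b64url_decode(payload_b64))
```

The same function can run both at the proxy layer and inside the app, which is the "verify in both places" posture mentioned above.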
Hey one of the Authelia developers here. Authelia works out of the box with Caddy's forward auth directive. In fact I helped Caddy develop and test the feature to ensure it had feature parity with an existing implementation/spec.
You have very granular options to customize the experience allowing multiple sites to have the same or different rules, which can alter the authentication requirements based on remote ip, users, groups, paths, domains, etc.
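For a sense of what those granular rules look like, here is a hedged sketch of Authelia's `access_control` section; the domains, networks and groups are placeholders, so check the Authelia docs for the full rule syntax.

```yaml
access_control:
  default_policy: deny
  rules:
    # public site, no auth required
    - domain: "public.example.com"
      policy: bypass
    # everything else on the domain needs at least one factor,
    # and only from the local network
    - domain: "*.example.com"
      policy: one_factor
      networks:
        - 192.168.1.0/24
    # admin UI requires 2FA and group membership
    - domain: "admin.example.com"
      policy: two_factor
      subject:
        - "group:admins"
```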
If you're looking for 2FA with OTP, email, SMS and also passkeys for self-hosting, then give Zitadel a spin. All features are included in the open source version. Should also work nicely with docker compose + nginx. In case you have issues, join the chat.
https://github.com/zitadel/zitadel
I have Authelia running for 2+ years already. I configured it with LDAP using "LLDAP" [0], a lightweight LDAP implementation. I then use Caddy as a reverse proxy and integrate it [1] with Authelia. This works great. I have solid 2FA for all my services and I feel my self-hosted applications are secure enough to be accessed without VPN. My only concern is that Authelia hasn't had a new release for more than a year, which raises security concerns.
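For anyone wanting to replicate this, the LLDAP + Authelia wiring mostly comes down to pointing Authelia's LDAP backend at LLDAP's port. A hedged sketch using 4.37-era configuration keys; the DNs, hostname and secret are placeholders.

```yaml
authentication_backend:
  ldap:
    implementation: custom
    # LLDAP listens on 3890 by default
    url: ldap://lldap:3890
    base_dn: dc=example,dc=com
    username_attribute: uid
    additional_users_dn: ou=people
    users_filter: "(&({username_attribute}={input})(objectClass=person))"
    # bind user created in LLDAP
    user: uid=admin,ou=people,dc=example,dc=com
    password: change-me
```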
> My only concern is that Authelia hasn't had a new release for more than a year, which raises security concerns.
I'm a bit concerned about that too. When setting it up, I found a lot of their docs on GitHub mentioned the `template` and `expand-env` "configuration filters", and it took me entirely too long to realize that, while the 4.38 pre-release notes posted in January 2023 say it's "just around the corner", it's still being worked on.
Having said that, there still seems to be somewhat active development. It may just be one person at this point.
It's me and two others, though I'm definitely the most active. We put a lot of effort into security best practices, and one of my co-developers is currently reviewing the 4.38.0 release. It's a fairly major release with a lot of important code paths that have been improved for the future.
Hey one of the Authelia developers here. We're very actively working on a very large release (it's just going through the peer review process and it should be good to go) and we currently have a pre-release for users to dive into.
I can understand the security concerns but we are regularly taking measures to ensure no zero-day vulnerabilities exist, there are no known vulnerabilities with Authelia at the present time either directly or via the code-paths of dependencies we actually use.
If you'd like a newer build of the pre-release, they are available. Feel free to reach out on GitHub Discussions (I may not see it here, but we'll see how we go).
I also went down this road recently and discovered caddy-security, but I have security concerns [0]. Software always has vulnerabilities, but this was enough to scare me off. Something like Keycloak or Authelia seems more tested and secure.
Authelia has a unique advantage in its small footprint, which is something that neither Keycloak nor authentik (which I work for, as a disclaimer) can really match. It's a very good fit for a homelab environment, and for anyone who doesn't need/want the features of the "bigger" solutions!
Tangential, but are there any alternatives to LDAP today? I googled around and found nothing. Simpler schema, JSON, http, etc. I can't believe nobody has attempted to recreate it when people have for basically everything else. AFAICT the only real other standard is Active Directory.
And secondly, is there a standard or protocol for offloading access control? I see Authelia allows you to control access by URL patterns, but I'd expect e.g. a fileserver to instead reach out to the LDAP server and check permissions based on the authenticated user id and keys in the database. This seems like the opposite of OAuth2, which is for a server getting access itself to a 3rd party service.
Really? It prefers a database, sure, but you can also store on disk. And you can also configure the main user with env variables.
It starts in under 3s on my toy server since they refactored it a few years ago, and uses less than 100 MB of RAM (again, toy server, not many users etc)
Idk, calling that a behemoth is kinda a stretch at that point...?
The thing that annoys me about Keycloak is how they decided to ship it. I really don't want to maintain a CI pipeline to deploy it... but you're kinda forced to with how they've designed their Docker image.
Not an issue for enterprise as they're gonna be doing that anyway, but annoying for home servers
Behemoth may be a bit of a stretch, but the container is 20x the size of Authelia's, and its minimum recommendations are 512 MB of RAM and 1 GB of disk space. Compared to Authelia, which is using 30 MB of RAM and 500 KB of disk space, yeah, it's big.
Not to mention, only the admin user being configurable via environment variables isn't enough. With Authelia, I can share my homelab setup and with a couple of environment variable changes, people can have SSO integrated. There's no need to write guides or grab screenshots to help them get set up.
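As an illustration of the "couple of environment variable changes" point, here's a hedged compose sketch: secrets come from the environment (following Authelia's `AUTHELIA_*` naming convention) while the rest ships as a static config file. The image tag and variable set are illustrative; consult the Authelia docs for the full list.

```yaml
services:
  authelia:
    image: authelia/authelia:4.37
    environment:
      # per-deployment secrets supplied via the environment
      AUTHELIA_JWT_SECRET: ${AUTHELIA_JWT_SECRET}
      AUTHELIA_SESSION_SECRET: ${AUTHELIA_SESSION_SECRET}
      AUTHELIA_STORAGE_ENCRYPTION_KEY: ${AUTHELIA_STORAGE_ENCRYPTION_KEY}
    volumes:
      # shared, static configuration everyone can reuse as-is
      - ./configuration.yml:/config/configuration.yml:ro
```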
have you looked at the codebase? it's been a while but I was implementing Keycloak a few years ago and it was shocking how big the codebase is and how difficult it is to change things to add what felt like basic functionality. making plugins didn't seem like a viable option either.
oh, not to mention the statefulness of it; it was almost impossible to destroy and re-create an instance from scratch without a bunch of manual point-and-click via the UI.
It is built on a plugin architecture, so plugins are certainly a viable option and this is documented in more detail here[0]. In general I have found the Keycloak docs thorough and well-written. When I operated Keycloak I built a few plugins to solve specific needs/assumptions we had around IdP when migrating to Keycloak from a bespoke solution.
Re: your second point, the docs also describe this in detail[1]. Having the realm data exist in a simple form that can be exported/imported was very useful. However, I would have liked if they thought more about how to do live backup/restore; perhaps that is easier now than it was at the time.
A lot of problems, actually, and most people don't have many of them. If you just want an OIDC server in front of your self-hosted apps you can solve that with a much simpler and faster tool.
The docs can say whatever they want; there were large parts of our configuration that weren't included in an export, so we couldn't automate provisioning.
Had a similar experience and even filed a couple of bugs back then. I don't know the current state, but back then it felt like something that looked like halfway modern Java but still carried around large amounts of old-school JEE cruft. That was probably during the migration to Quarkus, though. So it probably got better?
Interesting... I had the exact opposite impression. The codebase is big but very easy to understand, and their SPIs[1] make it quite easy to customise Keycloak's behaviour.
For the statefulness using terraform[2] solves the problem for me.
> 2024-02-11 17:15:35,764 INFO [io.quarkus] (Shutdown thread) Keycloak stopped in 0.064s
> 2024-02-11 17:15:37,754 INFO [org.keycloak.common.Profile] (main) Preview feature enabled: token_exchange
...
> 2024-02-11 17:15:44,694 INFO [org.infinispan.CLUSTER] (jgroups-8,a4c127cdee40-48683) ISPN100000: Node 72d48695e84c-46513 joined the cluster
so 7 seconds altogether on that restart, though a lot of that time is spent waiting for other nodes before it bootstraps the cluster.
(it's single node as it's a toy server, as I said before)
Recently I looked into having a relatively simple SSO setup for my homelab. My main objective is that I could easily login with Google or GitHub auth. At my previous job I used both JetBrains Hub [1] and Keycloak but I found both of them a bit of a PITA to setup.
JetBrains Hub was really, really easy to get going. As was my previous experience with them. The only thing that annoyed me was the lack of a latest tag on their Docker registry. Don't get me wrong, pinned versions are great, but for my personal use I mostly just want to update all my Docker containers in one go.
On the other hand I found Keycloak very cumbersome to get going. It was pretty easy in dev mode, but I stumbled to get it going in production. AFAIK it had something to do with the wildcard Let's Encrypt cert that I tried to use. But after a couple of hours, I just gave up.
I finally went with Dex [2]. I had previously put it off because of the lack of documentation, but in the end it was extremely easy to set up. It just required some basic YAML, a SQLite database and a (sub)domain. I combined Dex with the excellent OAuth2 Proxy and a custom Nginx (Proxy Manager) template for an easy two-line SSO configuration on all of my internal services. I also created a Dex Docker template for unRAID [4].
In addition to this setup, I also added Cloudflare Access and WAF outside of my home to add some security. I only want to add some CrowdSec to get a little more insights.
Great addition. I remember that I also looked at Obligator and saved it to my bookmarks. But I decided against it because IMHO the project was just a bit too young. Normally I tend to ignore that, but I really didn't want to switch auth/SSO solutions in a couple of months time because of a lack of maintenance or something like that.
Dex only acts as a federated identity provider. Unlike oauth2-proxy which acts as a service provider for services that don't have authentication themselves.
It is a great project. I've been using it for two years and can only say good things. The maintainer is very responsive, and most recently a change was made to both Keycloak and Keycloakify to future-proof its approach to theming.
Currently it officially supports only apps made with create-react-app, but there is a vite branch in development, and we've personally been using it with vite for a while.
Ooh, thanks for that. I work for a competitor and we're reworking our theming and this seems like a great project to review and model the rework after.
Authelia looks really good to me, but the fact that Keycloak has connectors for Angular, while with Authelia you need to set up OIDC Angular plugins yourself, made me a little bit wary. But I guess having a ready-made config for Keycloak makes it easier to get started.
For anyone considering authentik, I want to warn you by saying "here be dragons."
To start, I have protected 10+ services at any given time. Both in docker and k8s. Unless you enjoy configuring protection for each service independently, you'll have a bad time in authentik.
Authentik suffers from a debilitating bug[0] where when using a single config to protect all services on subdomains (i.e. app1.example.com, app2.example.com, etc.) your users will be randomly redirected to a different service when reauthenticating after the session expires.
Good to hear, I think it'll make many users happy. For me, I've migrated back to Authelia. I moved to authentik because at the time Authelia had no user management. After all of authentik's sharp edges, I've found lldap[0], and was able to implement a pilot in a few hours. I haven't looked back, since everything was converted.
Authentik has completely messed up their implementation of the oauth client credentials grant. It is not fixable without breaking changes and does not work with many tools using the cc grant.
After seeing this they were completely off the table for me.
authentik CTO here; we'll fix this in the next release (March-April), it should be possible in a non-backwards-incompatible way using the suggestion in this comment https://github.com/goauthentik/authentik/issues/6139#issueco... (which does call that solution a hack, but I wouldn't necessarily agree)
One of the Authelia principal maintainers here. If there's anything we can do to help with the configuration of Angular, we'd be more than happy to via the GitHub discussions.
The issue with Keycloak is that it’s been around a while and has gone through a ton of changes. While it started as a JBOSS project, its usefulness as an IdP shines in on-prem cluster auth. However, that’s where I would stop. I implemented Keycloak at scale on AWS ECS for a Fortune 500 department and it was unholy war for 1,000 years getting it to cluster properly. DNS discovery didn’t work right. Cluster discovery was over UDP (which didn’t work in our cloud environments). Stateful login on one server was missing from the others so dumb load balancing was off the table - sticky sessions was the only way. While it’s easy to docker run Keycloak and plug it in like Auth0, it’s like buying a 1996 Ford F-150 with 250,000 miles. It runs. It works. But Jesus is it a maintenance madonna.
I did it at quite small scale, but within an on-prem Docker Swarm. It was indeed a pain because, if I remember correctly, the default discovery uses multicast, which is not enabled on typical cloud networks or on a Swarm/Kubernetes overlay network. I looked at database pings, where they'd use your RDBMS for a sort of quorum mechanism, but that seemed very brittle and I got the impression it was more of a last-resort type thing.
I was able to use the kubernetes cluster driver which uses the Swarm cluster's DNS for node discovery. It was indeed quite a pain to get working, but since then has been solid as far as I know. I believe there is also a native ec2 networking driver these days, but that is not something that I explored.
Keycloak is great but can be quite confusing for starters. It has a lot of features and not the best documentation. I recently tried Zitadel and found it much easier to use. It's a bit harder to self-host though because it can't run on an embedded database.
My boss recently called Keycloak "the gift that keeps on giving", but he was actually commenting on how there's a new ticket in jira for figuring out how the f*?k to do something.
Having said that, I have terraform that creates an EKS cluster, deploys Keycloak, creates clients (SAML/OIDC), adds external identity providers, sets up an AWS IAM Identity Provider for it, etc. That makes it extremely easy to use once you've figured out (a) what it can do, (b) how to do it in the UI, (c) how to get mrparkers' Terraform provider to do it for you.
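For a flavor of that approach, here's a minimal sketch using the mrparkers/keycloak Terraform provider; the resource attributes shown are real provider basics, but all names, URLs and values are placeholders, so check the provider docs for the full schema.

```hcl
provider "keycloak" {
  client_id = "admin-cli"
  username  = "admin"
  password  = var.admin_password
  url       = "https://keycloak.example.com"
}

# a realm, which other auth products would call a tenant
resource "keycloak_realm" "main" {
  realm   = "homelab"
  enabled = true
}

# an OIDC client for one application in that realm
resource "keycloak_openid_client" "app" {
  realm_id              = keycloak_realm.main.id
  client_id             = "my-app"
  access_type           = "CONFIDENTIAL"
  standard_flow_enabled = true
  valid_redirect_uris   = ["https://app.example.com/callback"]
}
```

Because the whole realm is declared in code, `terraform apply` against a fresh environment reproduces it without UI clicks, which addresses the config-drift concern raised below.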
haha, nope unfortunately. But I also use an odd method of keeping my terraform code DRY via Hiera (yes, the Puppet thing). If you're interested I can find out if it's ok to open source it.
That would be really helpful. At the company I'm working for, we are transitioning to Keycloak, and one question that I have no answer for yet is how to standardize deployments across environments. Ideally, I would love to apply DevOps best practices, and try to script the provisioning of as many components as I can (clients, flows, etc.), avoiding config drift between environments. The only solution I found out for now is configuring the realm as I like and exporting it into JSON through the admin UI, placing the resulting file in the appropriate directory, and supplying the --import-realm flag at startup. That seems very fragile.
Ping my email, it's my username at Gmail. I'm happy to go through the wonky shite that I use. Be warned, I've wrapped a subset of Keycloak features that I use. But that includes realms, clients, identity providers, users, groups and a certain amount of extra stuff like client scope user attributes.
At a previous company we also used the exported JSON, and it's fine to spin up a realm, but horrible for ongoing admin.
After working with Keycloak for a couple of years I honestly got fed up with all its quirks and started to look at alternatives. Authentik and many more looked promising, but Zitadel[1] caught my eye and I've never looked back since.
I've been using Keycloak for quite a while. The main problem I have with it is that you can't get a link to reset a password; you have to call an API that does it for you. In fact, that is how most of the product goes. It is very opinionated.
Making it a cluster is also not easy, though I did it and it works ok.
Another issue is that realms have a limit. You can spin up a new instance for every 200 realms, but that's not for me... Instead, I just use it for login and handle roles internally in my app for every tenant<>user<>role mapping, but then I get back to thinking this was overkill... still, a properly secured login system is difficult to build and I don't want to deal with that...
I am complaining but what is the alternative? Do it all yourself? It is either too risky or too difficult...
> Forgot Password: If you enable it, users are able to reset their credentials if they forget their password or lose their OTP generator. Go to the Realm Settings left menu item, and click on the Login tab. Switch on the Forgot Password switch.
As I said, very opinionated. Meaning you have to use their way. So, for example, if I want to add Turnstile bot protection to the reset password screen so my AWS SMTP won't be abused, I have to write a plugin instead of just getting the URL to send myself.
A problem I've had trying to do this for local dev is that the DNS name of the Keycloak server is "keycloak" inside of the Docker network, but "localhost" from the outside. The user's browser will be redirected to localhost (since it's outside of the Docker network), but then there is a mismatch between hosts (it expects "keycloak", not "localhost") when it comes to an API server verifying the token.
I haven't tried this, but could you modify your /etc/hosts file (or analogous file on whatever operating system) to have `keycloak` as a valid hostname on your local computer? So that both within the docker container and in your host computer, `keycloak` was the hostname of the keycloak server?
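If that workaround pans out, it's a one-line change on the developer machine (the hostname here matches the Docker service name from the comment above; adjust to taste):

```
# /etc/hosts on the host machine: make "keycloak" resolve locally,
# so browser redirects and in-network token issuer URLs agree
127.0.0.1   keycloak
```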
Some additional things that someone might find useful:
Keycloak supports separating token endpoints (the OAUTH2/OIDC URLS used for tokens) and the built-in login challenge pages from Keycloak's admin console and the REST API. In other words, you can offer the token endpoints and login service to the public and isolate all the admin and backend stuff to a different hostname and/or URL scheme. Have a look at the KC_HOSTNAME_ADMIN[_URL] config option(s).
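A hedged sketch of what that split looks like in environment-variable form; the hostnames are placeholders, and the exact option names vary somewhat between Keycloak versions, so verify against the server configuration docs.

```shell
# public token endpoints and login pages
KC_HOSTNAME=auth.example.com
# admin console and REST API pinned to an internal hostname
KC_HOSTNAME_ADMIN=keycloak-admin.internal.example.com
```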
Keycloak has a capability called Authorization Services, an implementation of User Managed Access (UMA2). It's opt-in and probably 99% of Keycloak users don't know it exists. It can be used to implement fine-grained authorization schemes on top of OAUTH.
I've found that realms in Keycloak don't scale quite as well as we would have wanted.
There is some kind of slowdown, and after around 200 realms things start breaking and startup time starts growing uncontrollably.
There's a good chance it's an issue in our setup (which is fairly complex), but every time we look at it we trace the slowdown back into keycloak itself
Just letting you know it's not you - this is a well-known, long-standing issue with Keycloak. Typically users see a significant performance cliff at around 300-400 realms. While one realm is not necessarily the same as one tenant in Keycloak, it does make it a significantly larger headache to support multi-tenancy with SSO integrations in a single realm.
I'm afraid I can't give you more details than that, we just moved on from keycloak at that point.
For people not familiar with keycloak, keycloak realms are equivalent to what other auth providers call tenants (Auth0, FusionAuth [which I work for]) or user pools (Cognito). Basically a segmented set of users and configuration related to those users.
Love keycloak. What I would do differently though is run it on a host (or in a k8s pod) & have it serving via http to localhost, then use cloudflared to tunnel & present it as https. Saves messing around with certificates etc, it's all automatic.
I had this discussion before and thought long about using Caddy, but then decided for nginx, directly on the host, basically following this reasoning [1].
My main motivation is that there is just more information available for nginx due to its wider use. And when you need to customize Caddy due to different requirements by services, you end up with the same or worse complexity compared to an nginx .conf. Nginx is just very robust and the configuration is not so hard to get used to.
My go-to for anything more complicated is definitely NGINX, but for ease of use in a very straightforward docker containers running HTTP services, it’s hard to beat the ease of use of the modified version of Caddy I linked
After looking for a solution for my home lab for a while I ended up with Authentik. It's a comparatively new kid, but it was very easy to set up and the documentation is really excellent!
Thanks for the nod to the authentik documentation! That's my primary role at authentik... we are going to do some restructuring soon, with a goal to keep the current level of excellence while adding in all the new features coming up in our 2024.2.x release, plus add more How To (procedurals). We would appreciate any feedback, and of course we'd love any contributions to our docs in GitHub!
I had a hard time trying to integrate Spring Security 3.1 and Keycloak. All the examples are based on @Deprecated connectors. Even in Spring Security itself there are a lot of huge changes ongoing.
In other words, I can protect arbitrary applications through my reverse proxy and require either certain claims/roles, or simplify auth to the point where my downstream app/API will just receive a bunch of headers like OIDC_CLAIM_sub, OIDC_CLAIM_name, OIDC_CLAIM_email through the internal network, not making me bother with configuring OIDC libraries for all of my APIs and configure them in each stack that I might use, but rather contain all of that complexity in the reverse proxy. Handling headers in pretty much every stack is trivial, be it something with .NET, Java, Node, Python and so on...
Basically:
user <==> Apache (with mod_auth_openidc) <==> API (with OIDC_ headers, if logged in and authorized)
OR <==> Keycloak (for logging in/registration, followed by redirect)
Apache probably isn't the ideal tool for this, but was definitely the easiest to setup and also has mod_md nowadays, making renewing SSL certs (with HTTP-01, at least) closer to how Caddy might do it, I don't even need certbot: https://httpd.apache.org/docs/trunk/mod/mod_md.html
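For anyone wanting to try this pattern, here is a hedged mod_auth_openidc sketch; the directives are real, but the client id/secret, realm name and URLs are placeholders.

```apache
# point at Keycloak's OIDC discovery document for the realm
OIDCProviderMetadataURL https://keycloak.example.com/realms/main/.well-known/openid-configuration
OIDCClientID my-proxy
OIDCClientSecret changeme
# a path on this vhost that the module handles itself
OIDCRedirectURI https://app.example.com/redirect_uri
OIDCCryptoPassphrase some-long-random-string

<Location "/api/">
    AuthType openid-connect
    Require valid-user
    # mod_auth_openidc forwards claims to the backend as
    # OIDC_CLAIM_sub, OIDC_CLAIM_email, etc.
</Location>
```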
Now, right now I'm moving away from storing groups in the Keycloak DB because it's actually easier to do that in the app DB and only allow Keycloak to reason about who users are, not what they can do (since interacting with Keycloak's API to manage users is cumbersome), especially when I want custom permissions: like users being assigned to companies, but also having specific permissions for each of those companies, your typical granular system.
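The app-side half of that split is simple to model: Keycloak says who the user is, and the app DB answers what they may do per company. A minimal in-memory sketch (in a real app this would be a table keyed on user, company and permission; all names here are hypothetical):

```python
from collections import defaultdict

class PermissionStore:
    """App-owned authorization: (user, company) -> granted permissions."""

    def __init__(self):
        self._grants = defaultdict(set)

    def grant(self, user_id, company_id, permission):
        """Record that user_id holds permission within company_id."""
        self._grants[(user_id, company_id)].add(permission)

    def is_allowed(self, user_id, company_id, permission):
        """Check a permission for the identity the IdP authenticated."""
        return permission in self._grants[(user_id, company_id)]
```

The identity provider never needs to know about companies or permissions; the app passes the authenticated subject (e.g. the OIDC `sub` claim) into `is_allowed`.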
Personally, I'd definitely say that OIDC feels overcomplicated to work with, so I'm glad that I discovered something like this. There's a whole list of other Relying Party implementations, but nobody really seems to be talking about them: https://openid.net/developers/certified-openid-connect-imple...
Also, in regards to Keycloak in particular, most likely you really want to build the "optimized" image, because otherwise the startup times will be annoyingly long: https://www.keycloak.org/server/containers (like the article already points out, but that was a pain point until I got it over with)
Oh, and Keycloak is known to be a bit odd in how it sometimes works behind a reverse proxy, I actually needed to put this in the configuration because otherwise connections would sometimes break:
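The exact snippet the comment refers to isn't shown, but on recent Keycloak versions a reverse-proxy setup commonly needs settings along these lines (values illustrative; older releases used `KC_PROXY=edge` instead of `KC_PROXY_HEADERS`):

```shell
# terminate TLS at the proxy, speak plain HTTP internally
KC_HTTP_ENABLED=true
# trust X-Forwarded-* headers from the reverse proxy
KC_PROXY_HEADERS=xforwarded
# public hostname the browser sees
KC_HOSTNAME=auth.example.com
```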
> Now, right now I'm moving away from storing groups in the Keycloak DB because it's actually easier to do that in the app DB and only allow Keycloak to reason about who users are, not what they can do (since interacting with Keycloak's API to manage users is cumbersome), especially when I want custom permissions: like users being assigned to companies, but also having specific permissions for each of those companies, your typical granular system.
Have you thought about extracting your authorization to a separate server? It might be overkill if you only have one application, but I've seen demos/had conversations with the folks at cerbos and permit, which both offer open source authorization as a service. That way your user authentication is done by Keycloak, your authorization is done by cerbos/permit/etc, and your application only is responsible for application data/functionality.
Because when you run 10 services, most of which require at least one supporting service, and for each service you need to specify at least 5 different options (volumes, resource limits, health checks, depends_on, MAC address, IP address, etc.), with just `docker run` you will have a mess. It's much simpler to have a docker-compose file per service with all of its dependent services.
Docker Compose is to plain `docker run` what C/C++ is to assembly.
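To make that concrete, the same setup expressed declaratively; the image names and options are hypothetical, but they map one-to-one onto the flags you'd otherwise pass to `docker run`:

```yaml
services:
  app:
    image: example/app:1.0
    depends_on:
      db:
        condition: service_healthy  # wait for the dependency
    mem_limit: 256m                 # resource limit
    volumes:
      - app-data:/data              # named volume
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
volumes:
  app-data:
```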