Keycloak SSO with Docker Compose and Nginx (nkel.dev)
222 points by Helmut10001 on Feb 11, 2024 | 108 comments



I recently went down this road for my home lab and went with Authelia.

Keycloak works, but it's a behemoth and still needs additional services to work with Traefik forward auth.

Authelia works great. You don't get a UI to edit users, and there's no two-way sync with a backing LDAP server, but the fact that it can be configured with a static file plus environment variables makes it a great fit for a lot of cases. If you're just looking to add auth to some services, and SSO to those that can work with it, I'd suggest starting with Authelia.


Disclosure: I work for FusionAuth.

If you are looking for SSO + easy configuration + an admin UI (which admittedly has a mid-2000s UX look and feel), you should check out FusionAuth. It's free to download and run yourself[0], Docker friendly[1], and has a variety of configuration choices[2] (including Terraform[3] to manage your OIDC/other settings).

Worth noting that it is free to use, but not open source[4].

0: https://fusionauth.io/download

1: https://fusionauth.io/docs/get-started/download-and-install/...

2: https://fusionauth.io/docs/operate/deploy/configuration-mana...

3: https://fusionauth.io/docs/operate/deploy/terraform

4: https://fusionauth.io/license-faq#28


Genuinely curious, how do you plan to compete with Ory and Zitadel in this space without being open source?


Great question. Appreciate the interest.

I can only speak from my perspective as an employee, not the whole company. It is something I've thought about. I will also ask the CEO/founder or other leaders to weigh in.

Many devs care about open source when they are evaluating a solution, but many really want "free as in beer". They want to try a product without getting out the credit card or engaging with sales. We cater to the latter category, which wants to understand the product quality without talking to any sales people.

Some of these folks use our community product for their production needs, which is perfectly fine. We have people running FusionAuth in production with 1000s of tenants or 10000+ applications. (I always say they "pay" us by filing bug reports, giving feedback and voting on feature requests.)

But some decide they want to pay us for hosting, support or advanced features. Those choices help us build a business.

Devs, and especially buyers, are interested in the sustainability of a product they are going to integrate into their system. An auth provider isn't a JavaScript widget that you can easily drop into or remove from your system. It embeds in your system, even if you stick to a standard like OIDC, and is difficult to switch from, especially at scale. You want to be sure the company and product are going to stick around. (If you want to make sure you can run the product even if everyone at FusionAuth wins the lottery, we do offer source code escrow for a price, but haven't had anyone take us up on it.)

FusionAuth is a profitable company (we did recently raise a round to accelerate growth, you can read more about that here [0]). Open source companies often have a hard time meeting the profit goals of the market or investors. This is a known issue and often results in relicensing or changing the rules, as we've seen with Hashicorp[1] and Elastic[2]. This switcheroo can cause issues, confusion, and ill-will; FusionAuth's licensing avoids that.

FusionAuth also develops in the open[3]. This open development process gives us another common benefit people get from OSS--community feedback.

Also, I don't want to throw shade at Ory and Zitadel, since I have no idea about their finances (apart from a brief look at Crunchbase, which shows they've raised 22.5M[4] and 2.5M[5] respectively). I hope they're building sustainable businesses, but selling closed source software is a proven route to a profitable business and has built many big companies (including in the auth space, such as Okta or Auth0). Again, this is not FUD (or at least I don't intend it to be!), just an honest assessment of the difficulties of making money in open source dev tools [6].

We also compete on features, support and documentation. Again, I can't speak to Ory or Zitadel; they look nice, but I haven't built anything with them, so it is hard for me to know how good they are. I do know that we have had many clients appreciative of those aspects of our product.

To sum up:

* FusionAuth has a free option, which helps reduce friction and gives some of the benefits of OSS. The open process and escrow also give some of the benefits of OSS.

* Some devs and buyers care about business sustainability, especially when integrating a tool deeply into their application. FusionAuth will never have to worry about relicensing a version because AWS is eating our SaaS revenue stream, for example.

* We offer great support, documentation and intricate auth features at a reasonable price.

Hope this helps.

0: https://fusionauth.io/blog/fusionauth-and-updata

1: https://www.hashicorp.com/license-faq

2: https://www.elastic.co/pricing/faq/licensing

3: https://github.com/FusionAuth/fusionauth-issues/issues/

4: https://www.crunchbase.com/organization/ory/company_overview...

5: https://www.crunchbase.com/organization/zitadel

6: I wrote about this a few years ago on my personal blog: https://www.mooreds.com/wordpress/archives/3438


>> It embeds in your system, even if you stick to a standard like OIDC, and is difficult to switch from, especially at scale.

This is precisely why a real, Open Source solution should be a top-priority feature for anyone evaluating software like this. We have seen closed source solutions, time and time again, "realign" their business by dropping features, or even entire free plans. In some cases, closed source software has been removed from the market altogether when the company gets bought out. Changes in pricing plans lead to a worse fit for customers... etc. At least when Hashicorp and Elastic abandoned Open Source, the community had the option to say "yeah, good luck, thanks for the good times, but no thanks on your new direction" and fork the product to continue maintaining it. When a closed source product's owners decide to "shift gears" on the product line-up, customers are left between a rock and a hard place. Maybe it would be different if there were no Open Source options, but fortunately, there are several in the auth space.


I've been thinking lately how important it is that software be not only open source, but simple enough to be forkable. The fact that Keycloak is open source almost seems irrelevant given how massive the codebase is.

Browsers represent an extreme case.


Thanks for the detailed answer. If people are already taking advantage of your free offering, how would it be different if you released the source? If you're worried about another company piggybacking off your work to start a competitor, why not use one of these newfangled non-compete licenses? That still gives users the security of being able to fork the project internally if you go under.


The free version of FusionAuth has limits beyond what other licenses have. See https://fusionauth.io/license-faq#3 for more details.

I'm afraid that's all the detail I have about the decision to keep FusionAuth closed source.


Sorry, on re-read, this sounds kinda abrupt. My bad.

I think in the end this is a business decision. The executive team has considered the options and decided that closed source is the right path for the company.


No worries. Didn't seem abrupt.


How do I integrate this with a reverse proxy like Caddy and their forward_auth directive? I want to secure my apps on the proxy layer, not the app layer.


You'd have FusionAuth issue the tokens through an authentication event (typically the authorization code grant for interactive sessions). Lots of flows with sequence diagrams outlined here [0].

Then store the tokens on the client. For browsers dealing with first party apps (everything under the same domain) we recommend cookie storage of these tokens [1].

Then have Caddy examine the tokens provided by the browser. Here's an example Caddy config I put together for a workshop [2].

Finally, depending on your security posture, you might want to verify tokens both in app and at the proxy.

0: https://fusionauth.io/articles/login-authentication-workflow...

1: https://fusionauth.io/articles/oauth/oauth-token-storage

2: https://github.com/FusionAuth/fusionauth-example-php-api-wor...


Hey one of the Authelia developers here. Authelia works out of the box with Caddy's forward auth directive. In fact I helped Caddy develop and test the feature to ensure it had feature parity with an existing implementation/spec.

You have very granular options to customize the experience, allowing multiple sites to have the same or different rules, which can alter the authentication requirements based on remote IP, users, groups, paths, domains, etc.
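
For anyone curious what that looks like in practice, a minimal Caddyfile along the lines of the Authelia docs is roughly this (hostnames are illustrative, and the exact verify endpoint depends on the Authelia version):

  app.example.com {
      forward_auth authelia:9091 {
          # ask Authelia whether this request is allowed; unauthenticated users get sent to the portal
          uri /api/verify?rd=https://auth.example.com/
          # pass the resolved identity on to the upstream app
          copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
      }
      reverse_proxy app:8080
  }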


So if you self-host you still have to pay to have 2FA?


If you're looking for 2FA with OTP, email, SMS, and also passkeys for self-hosting, then give Zitadel a spin. All features are included in the open source version. Should also work nicely with docker compose + nginx. In case you have issues, join the chat. https://github.com/zitadel/zitadel


TOTP 2FA is free. SMS and email MFA are part of the paid plans.

From https://fusionauth.io/docs/lifecycle/authenticate-users/mult...

> However, the Authenticator/TOTP implementation is not a premium feature.


I have had Authelia running for 2+ years already. I configured it with LDAP using "LLDAP" [0], a lightweight LDAP implementation. I then use Caddy as a reverse proxy and integrate it [1] with Authelia. This works great. I have solid 2FA for all my services and I feel my self-hosted applications are secure enough to be accessed without a VPN. My only concern is that Authelia hasn't had a new release for more than a year, which raises security concerns.

[0] https://github.com/lldap/lldap

[1] https://www.authelia.com/integration/proxies/caddy/


> My only concern is that Authelia hasn't had a new release for more than a year, which raises security concerns.

I'm a bit concerned about that too. When setting it up, I found a lot of their docs on GitHub mention the `template` and `expand-env` "configuration filters"; it then took me entirely too long to realize that, while the 4.38 pre-release notes (posted in January 2023) say it's "just around the corner", it's still being worked on.

Having said that, there still seems to be somewhat active development. It may just be one person at this point.

https://hub.docker.com/layers/authelia/authelia/v4.38.0-beta...

https://github.com/authelia/authelia/commits/v4.38.0-beta3/


It's me and two others, though I'm definitely the most active. We put a lot of effort into security best practices, and one of my co-developers is currently reviewing the 4.38.0 release. It's a fairly major release with a lot of important code paths that have been improved for the future.

Our official docs can be found at https://www.authelia.com and you can find docs for a particular PR in the relevant PR. We've also linked the pre-release docs in the pre-release discussions which can be found here: https://github.com/authelia/authelia/discussions/categories/...


Hey one of the Authelia developers here. We're very actively working on a very large release (it's just going through the peer review process and it should be good to go) and we currently have a pre-release for users to dive into.

I can understand the security concerns, but we are regularly taking measures to ensure no zero-day vulnerabilities exist; there are no known vulnerabilities with Authelia at the present time, either directly or via the code paths of dependencies we actually use.


That's great to hear, I have no plans on moving away from Authelia. I love its simplicity.


Newest Arch package is just over 4 months old - Last Updated: 2023-10-09 03:25 (UTC): https://aur.archlinux.org/packages/authelia


That's not a new release of authelia. Authelia's releases are at https://github.com/authelia/authelia/releases

The updates to the AUR package since 2022 were not about new releases:

  aur/authelia $ git log ad4e6ca^..HEAD
  2c5029d (2023-10-09) Amir Zarrinkafsh ECDB8EF9E77E4EBF (HEAD -> authelia, origin/authelia) Fix frozen lockfile issue with pnpm
  246d77c (2023-01-22) Amir Zarrinkafsh ECDB8EF9E77E4EBF Utilise pnpm instead of yarn
  ad4e6ca (2022-12-21) Amir Zarrinkafsh Update to v4.37.5


I stand corrected, thanks!


If you'd like a newer build of the pre-release they are available. Feel free to reach out on GitHub Discussions (may not see it here but see how we go).


I also went down this road recently. Personally I had gone with caddy-security[1] which is simply a plugin for Caddy.

[1] https://github.com/greenpau/caddy-security


I also went down this road recently, and discovered caddy-security, but I have security concerns [0]. Software always has vulnerabilities, but this was enough to scare me off. Something like keycloak or authentia seems more tested and secure.

[0] https://blog.trailofbits.com/2023/09/18/security-flaws-in-an...


> Software always has vulnerabilities

Yeah, that's an unfortunate reality, but

> The caddy-security plugin maintainers confirmed that there were no near-term plans to act on the reported vulnerabilities.

Ouch. That's a red flag, thanks for pointing it out. I guess it's time to check out Authelia (I think that's what you meant by authentia?).


Authelia has a unique advantage in its small footprint, which is something that neither Keycloak nor authentik (which I work for, as a disclaimer) can really match. It's a very good fit for a homelab environment and for anyone who doesn't need/want the features of the "bigger" solutions!


Tangential, but are there any alternatives to LDAP today? I googled around and found nothing. Simpler schema, JSON, http, etc. I can't believe nobody has attempted to recreate it when people have for basically everything else. AFAICT the only real other standard is Active Directory.

And secondly, is there a standard or protocol for offloading access control? I see Authelia allows you to control access by URL patterns, but I'd expect e.g. a fileserver to instead reach out to the LDAP server and check permissions based on the authenticated user id and keys in the database. This seems like the opposite of OAuth2, which is for a server getting access itself to a 3rd-party service.


> I [...] went with Authelia

Great choice!

> keycloak [...]'s a behemoth

Really? It prefers a database, sure, but you can also store on disk. And you can also configure the main user with env variables.
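
A minimal sketch of that (image tag and credentials are illustrative):

  docker run -p 8080:8080 \
    -e KEYCLOAK_ADMIN=admin \
    -e KEYCLOAK_ADMIN_PASSWORD=change-me \
    quay.io/keycloak/keycloak:23.0 start-dev   # dev mode keeps data on disk, no external DB needed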

It starts in under 3s on my toy server since they refactored it a few years ago, and uses less than 100 MB of RAM (again, toy server, not many users, etc.).

Idk, calling that a behemoth is kinda a stretch at that point...?

The thing that annoys me about Keycloak is how they decided to ship it. I really don't want to maintain a CI pipeline to deploy it... but you're kinda forced to with how they've designed their Docker image. Not an issue for enterprise as they're gonna be doing that anyway, but annoying for home servers.


Behemoth may be a bit of a stretch, but the container is 20x the size of Authelia's, and its minimum recommendations are 512 MB of RAM and 1 GB of disk space. Compared to Authelia, which is using 30 MB of RAM and 500 KB of disk space, yeah, it's big.

Not to mention, only the admin user being configurable via environment variables isn't enough. With Authelia, I can share my homelab setup and with a couple of environment variable changes, people can have SSO integrated. There's no need to write guides or grab screenshots to help them get set up.


Have you looked at the codebase? It's been a while, but I was implementing Keycloak a few years ago and it was shocking how big the codebase is and how difficult it is to change things to add what felt like basic functionality. Making plugins didn't seem like a viable option either.

Oh, not to mention the statefulness of it: it was almost impossible to destroy and re-create an instance from scratch without a bunch of manual point-and-click via the UI.


Keycloak solves a complex problem.

It is built on a plugin architecture, so plugins are certainly a viable option and this is documented in more detail here[0]. In general I have found the Keycloak docs thorough and well-written. When I operated Keycloak I built a few plugins to solve specific needs/assumptions we had around IdP when migrating to Keycloak from a bespoke solution.

Re: your second point, the docs also describe this in detail[1]. Having the realm data exist in a simple form that can be exported/imported was very useful. However, I would have liked if they thought more about how to do live backup/restore; perhaps that is easier now than it was at the time.

[0]: https://www.keycloak.org/docs/latest/server_development/inde...

[1]: https://www.keycloak.org/server/importExport#_importing_a_re...


> Keycloak solves a complex problem.

A lot of problems, actually, and most people don't have many of them. If you just want an OIDC server in front of your self-hosted apps you can solve that with a much simpler and faster tool.


The docs can say whatever they want; there were large parts of our configuration that weren't included in an export, so we couldn't automate provisioning.


Had a similar experience and even filed a couple of bugs back then. I don't know the current state, but back then it felt like having something that looked like halfway modern Java but still carried around large amounts of old-school JEE cruft. That was probably before the migration to Quarkus, though. So it probably got better?


Interesting... I had the exact opposite impression. The codebase is big but very easy to understand, and their SPIs[1] let you customise Keycloak's behaviour quite easily.

For the statefulness, using Terraform[2] solves the problem for me.

[1] - https://www.keycloak.org/docs/latest/server_development/#_pr...

[2] - https://github.com/mrparkers/terraform-provider-keycloak


It would be great to know how you start Keycloak. In our CI it takes 40s-1m to start and consumes 500 MB. We use the default Docker image.


  > 2024-02-11 17:15:35,764 INFO  [io.quarkus] (Shutdown thread) Keycloak stopped in 0.064s
  > 2024-02-11 17:15:37,754 INFO  [org.keycloak.common.Profile] (main) Preview feature enabled: token_exchange
   ...
  > 2024-02-11 17:15:44,694 INFO  [org.infinispan.CLUSTER] (jgroups-8,a4c127cdee40-48683) ISPN100000: Node 72d48695e84c-46513 joined the cluster
so 7 seconds altogether on that restart, though a lot of that time is spent waiting for other nodes before it bootstraps the cluster. (It's single-node as it's a toy server, as I said before.)

This is the Dockerfile: https://pastebin.com/rVdXjUkP

which is basically just the officially documented way to make the image, albeit set to use a reverse proxy for https

https://www.keycloak.org/server/containers

Your CI pipeline probably downloads the runtime dependencies from the internet each time, as building this image does indeed take around 40s.

The resulting image is ~620 MB.

As I said earlier, making this image has become basically mandatory with their switch to Quarkus. You should really address that ;)
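
For reference, the officially documented build that the pastebin follows is roughly this two-stage Dockerfile (a sketch, not the exact pastebin contents; KC_PROXY is the pre-Keycloak-24 reverse proxy option):

  FROM quay.io/keycloak/keycloak:23.0 AS builder
  # build-time options are baked in here so startup can skip augmentation
  ENV KC_DB=postgres
  ENV KC_HEALTH_ENABLED=true
  RUN /opt/keycloak/bin/kc.sh build

  FROM quay.io/keycloak/keycloak:23.0
  COPY --from=builder /opt/keycloak/ /opt/keycloak/
  # TLS is terminated at the reverse proxy in front of it
  ENV KC_PROXY=edge
  ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start", "--optimized"]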


Recently I looked into having a relatively simple SSO setup for my homelab. My main objective was that I could easily log in with Google or GitHub auth. At my previous job I used both JetBrains Hub [1] and Keycloak, but I found both of them a bit of a PITA to set up.

JetBrains Hub was really, really easy to get going. As was my previous experience with them. The only thing that annoyed me was the lack of a latest tag on their Docker registry. Don't get me wrong, pinned versions are great, but for my personal use I mostly just want to update all my Docker containers in one go.

On the other hand I found Keycloak very cumbersome to get going. It was pretty easy in dev mode, but I stumbled to get it going in production. AFAIK it had something to do with the wildcard Let's Encrypt cert that I tried to use. But after a couple of hours, I just gave up.

I finally went with Dex [2]. I had previously put it off because of the lack of documentation, but in the end it was extremely easy to set up. It just required some basic YAML, a SQLite database and a (sub)domain. I combined Dex with the excellent OAuth2 Proxy [3] and a custom Nginx (Proxy Manager) template for an easy two-line SSO configuration on all of my internal services. I also created a Dex Docker template for unRAID [4].

In addition to this setup, I also added Cloudflare Access and WAF outside of my home to add some security. The only thing I still want to add is some CrowdSec to get a little more insight.

1. https://www.jetbrains.com/hub/

2. https://dexidp.io/

3. https://github.com/oauth2-proxy/oauth2-proxy

4. https://github.com/alex3305/unraid-docker-templates
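
For anyone wondering what the "two line" part looks like, the usual oauth2-proxy + nginx wiring boils down to something like this (upstream names are illustrative):

  location / {
    # the two lines that gate an internal service behind oauth2-proxy
    auth_request /oauth2/auth;
    error_page 401 = /oauth2/sign_in;
    proxy_pass http://my-internal-app:3000;
  }

  location /oauth2/ {
    # oauth2-proxy itself handles login, the OIDC callback and the auth subrequest
    proxy_pass http://oauth2-proxy:4180;
    proxy_set_header Host       $host;
    proxy_set_header X-Real-IP  $remote_addr;
  }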


I use obligator with ephemeral storage, no db, 100% code driven setup.

In my opinion this is the simplest option.

https://github.com/lastlogin-io/obligator


Great addition. I remember that I also looked at Obligator and saved it to my bookmarks. But I decided against it because IMHO the project was just a bit too young. Normally I tend to ignore that, but I really didn't want to switch auth/SSO solutions in a couple of months time because of a lack of maintenance or something like that.


What features does oauth2-proxy provide that Dex is missing?


Dex only acts as a federated identity provider, unlike oauth2-proxy, which acts as a service provider for services that don't have authentication themselves.


Ah so no forward auth?


AFAIK no. Dex only seems to act as a federated IdP.


I made an (incomplete) comparison table of various self-hostable OpenID Connect servers[0].

One of the surprising things to me is just how massive the Keycloak codebase is.

[0]: https://github.com/lastlogin-io/obligator?tab=readme-ov-file...


The idea that Keycloak solves security issues makes me giggle. Its CVEs should shed some light on what I mean.

- https://www.cvedetails.com/vulnerability-list/vendor_id-25/p...


It's a double-edged sword. A closed source, non-transparent solution without a reported CVE is also laughable as "secure".

Also, the most recent release, 22.0.2, only has 3 (known) vulnerabilities. https://www.cvedetails.com/vulnerability-list/vendor_id-25/p...


The most recent release is 23.0.6.


If nothing else, that post's mention of https://www.keycloakify.dev/ is very handy - looks like a great alternative to the standard approach!


It is a great project. I've been using it for two years and can only say good things. The maintainer is very responsive, and most recently a change was made to both Keycloak and Keycloakify to future-proof its approach to theming.

Currently it officially supports only apps made with create-react-app, but there is a Vite branch in development, and personally we have been using it with Vite for a while.


Ooh, thanks for that. I work for a competitor and we're reworking our theming and this seems like a great project to review and model the rework after.


I've been eyeing authentik[1] and authelia[2].

Authelia looks really good to me, but the fact that Keycloak has connectors for Angular, while with Authelia you need to set up OIDC Angular plugins yourself, for example, made me a little bit wary. But I guess having a config for Keycloak makes it easier to get started.

[1] https://goauthentik.io/

[2] https://www.authelia.com/


For anyone considering authentik, I want to warn you by saying "here be dragons."

To start, I have protected 10+ services at any given time, both in Docker and k8s. Unless you enjoy configuring protection for each service independently, you'll have a bad time in authentik.

Authentik suffers from a debilitating bug[0] where when using a single config to protect all services on subdomains (i.e. app1.example.com, app2.example.com, etc.) your users will be randomly redirected to a different service when reauthenticating after the session expires.

[0]: https://github.com/goauthentik/authentik/issues/6886


Hey, authentik CTO here!

We'll be addressing the bug in the release after the next one (March-April).


Good to hear, I think it'll make many users happy. For me, I've migrated back to Authelia. I moved to authentik because at the time Authelia had no user management. After all of authentik's sharp edges, I've found lldap[0], and was able to implement a pilot in a few hours. I haven't looked back, since everything was converted.

[0]: https://github.com/lldap/lldap


Authentik has completely messed up their implementation of the OAuth client credentials grant. It is not fixable without breaking changes and does not work with many tools using the CC grant.

After seeing this they were completely off the table for me.

https://github.com/goauthentik/authentik/issues/6139


See here for the fix, which both implements the workaround suggested in the issue and also a much more standard-compliant method: https://github.com/goauthentik/authentik/pull/8471


authentik CTO here; we'll fix this in the next release (March-April). It should be possible in a non-backwards-incompatible way using the suggestion in this comment: https://github.com/goauthentik/authentik/issues/6139#issueco... (which does call that solution a hack, but I wouldn't necessarily agree).


One of the Authelia principal maintainers here. If there's anything we can do to help with the configuration of Angular, we'd be more than happy to via the GitHub discussions.


Authentik dev here, AMA


I am trying to make the same decision right now. Authentik looked better to me, but that bug mentioned in another reply sounds bad.


The issue with Keycloak is that it’s been around a while and has gone through a ton of changes. While it started as a JBOSS project, its usefulness as an IdP shines in on-prem cluster auth. However, that’s where I would stop. I implemented Keycloak at scale on AWS ECS for a Fortune 500 department and it was unholy war for 1,000 years getting it to cluster properly. DNS discovery didn’t work right. Cluster discovery was over UDP (which didn’t work in our cloud environments). Stateful login on one server was missing from the others so dumb load balancing was off the table - sticky sessions was the only way. While it’s easy to docker run Keycloak and plug it in like Auth0, it’s like buying a 1996 Ford F-150 with 250,000 miles. It runs. It works. But Jesus is it a maintenance madonna.


I did it at quite a small scale, but within an on-prem Docker Swarm. It was indeed a pain because, if I remember correctly, the default discovery uses multicast, which is not enabled on typical cloud networks or on a Swarm/Kubernetes overlay network. I looked at database pings, where they'd use your RDBMS for a sort of quorum mechanism, but that seemed very brittle and I got the impression it was more of a last-resort type thing.

I was able to use the kubernetes cluster driver which uses the Swarm cluster's DNS for node discovery. It was indeed quite a pain to get working, but since then has been solid as far as I know. I believe there is also a native ec2 networking driver these days, but that is not something that I explored.


I've heard there have been improvements since it fully moved to Quarkus. When did you do that deployment to ECS?


2017-2019, by 2020 we moved to another IdP middleman.


OAuth with OpenID Connect and being able to add apps and clients to a realm was its saving grace and the only reason we kept it around.


> Cluster discovery was over UDP (which didn’t work in our cloud environments)

Ouch, we are only allowed UDP for DNS.


Keycloak is great but can be quite confusing for starters. It has a lot of features and not the best documentation. I recently tried Zitadel and found it much easier to use. It's a bit harder to self-host though because it can't run on an embedded database.


That's true, it requires its own DB. The way forward will be Postgres as the DB, which will hopefully make it easier to deploy alongside other tools.


My boss recently called Keycloak "the gift that keeps on giving", but he was actually commenting on how there's a new ticket in Jira for figuring out how the f*?k to do something.

Having said that, I have Terraform that creates an EKS cluster, deploys Keycloak, creates clients (SAML/OIDC), adds external identity providers, sets up an AWS IAM Identity Provider for it, etc. That makes it extremely easy to use once you've figured out (a) what it can do, (b) how to do it in the UI, (c) how to get the mrparkers Terraform provider to do it for you.
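
A rough sketch of what that looks like with the mrparkers provider (names and attribute values are illustrative, not my actual config):

  terraform {
    required_providers {
      keycloak = { source = "mrparkers/keycloak" }
    }
  }

  provider "keycloak" {
    url       = "https://keycloak.example.com"
    client_id = "admin-cli"
    username  = "admin"
    password  = var.keycloak_admin_password
  }

  resource "keycloak_realm" "main" {
    realm   = "main"
    enabled = true
  }

  # one OIDC client per application that should log in via this realm
  resource "keycloak_openid_client" "my_app" {
    realm_id              = keycloak_realm.main.id
    client_id             = "my-app"
    access_type           = "CONFIDENTIAL"
    standard_flow_enabled = true
    valid_redirect_uris   = ["https://app.example.com/*"]
  }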


By any chance, is that Terraform open sourced somewhere? It's for a friend :)


Haha, nope, unfortunately. But I also use an odd method of keeping my Terraform code DRY via Hiera (yes, the Puppet thing). If you're interested I can find out if it's OK to open source it.


That would be really helpful. At the company I'm working for, we are transitioning to Keycloak, and one question that I have no answer for yet is how to standardize deployments across environments. Ideally, I would love to apply DevOps best practices and try to script the provisioning of as many components as I can (clients, flows, etc.), avoiding config drift between environments. The only solution I've found so far is configuring the realm as I like and exporting it into JSON through the admin UI, placing the resulting file in the appropriate directory, and supplying the --import-realm flag at startup. That seems very fragile.
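
For what it's worth, the same export/import can at least be scripted from the CLI rather than through the admin UI (a sketch, using the container's default paths), though it's still the same export-a-blob approach:

  # dump a realm and its clients/flows to JSON
  /opt/keycloak/bin/kc.sh export --realm myrealm --dir /opt/keycloak/data/import

  # on the target environment, anything under data/import is picked up at startup
  /opt/keycloak/bin/kc.sh start --import-realm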


Ping my email, it's my username at Gmail. I'm happy to go through the wonky shite that I use. Be warned, I've wrapped a subset of Keycloak features that I use. But that includes realms, clients, identity providers, users, groups and a certain amount of extra stuff like client scope user attributes.

At a previous company we also used the exported JSON, and it's fine to spin up a realm, but horrible for ongoing admin.


After working with Keycloak for a couple of years I honestly got fed up with all its quirks and started to look at alternatives. Authentik and many more looked promising, but Zitadel[1] caught my eye and I've never looked back since.

[1] https://zitadel.com/blog/zitadel-vs-keycloak


Nice to hear. Glad you like it. What was the winning thing for you?


I've been using Keycloak for quite a while. The main problem I have with it is that you can't get a link to reset a password; you have to issue an API call that does it for you. In fact, that is how most of the product goes. It is very opinionated. Making it a cluster is also not easy, though I did it and it works OK. Another issue is that realms have a limit. You can spin up an instance every 200 realms, but that is not for me... Instead, I just use it for login and do the roles internally in my app for every tenant<>user<>role, but then I get back to thinking this was overkill... but a secure login system is difficult and I don't want to deal with that...

I am complaining but what is the alternative? Do it all yourself? It is either too risky or too difficult...


Is this really true? Reading [1] they say:

> Forgot Password: If you enable it, users are able to reset their credentials if they forget their password or lose their OTP generator. Go to the Realm Settings left menu item, and click on the Login tab. Switch on the Forgot Password switch.

[1]: https://wjw465150.gitbooks.io/keycloak-documentation/content...


As I said, very opinionated, meaning you have to use their way. So, for example, if I want to add Turnstile bot protection to the reset-password screen so your AWS SMTP won't be abused, I have to write a plugin instead of just getting the URL to send myself.


SSO is Single Sign On for those as lost as me. SSO allows a user to log in one time and then access multiple services securely.


A problem I've had trying to do this for local dev is that the DNS name of the Keycloak server is "keycloak" inside of the Docker network, but "localhost" from the outside. The user's browser will be redirected to localhost (since it's outside of the Docker network), but then there is a mismatch between hosts (it expects "keycloak", not "localhost") when it comes to an API server verifying the token.

Anyone figured this out?


I haven't tried this, but could you modify your /etc/hosts file (or the analogous file on whatever operating system) to have `keycloak` as a valid hostname on your local computer? So that both within the Docker container and on your host computer, `keycloak` is the hostname of the Keycloak server?
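
A minimal sketch of that, assuming Keycloak's port 8080 is published to the host:

  # /etc/hosts on the host machine
  127.0.0.1   keycloak

Containers on the compose network already resolve `keycloak` through Docker's DNS, so both the browser and the API server could then use http://keycloak:8080 as the issuer.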


Always an option, but I dislike changing global settings on the system. It makes it harder to hop between projects and it's slower when onboarding.


You need to configure the "client" in Keycloak that corresponds with your front-end.


You don't need a bazillion Keycloak instances; it has realms.

1 Keycloak instance can have many realms.

I won't get into the nonsense of having a single database server per instance (not limited to Keycloak). Such a waste of resources.

You can have a system service, Postgres in this case, make it listen on the Docker interface, and set it up to launch after and require docker.service.
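
As a sketch of that ordering trick (the unit name and bridge address depend on the distro and setup):

  # /etc/systemd/system/postgresql.service.d/override.conf
  [Unit]
  After=docker.service
  Requires=docker.service

  # postgresql.conf: also listen on the default docker0 bridge address
  listen_addresses = 'localhost, 172.17.0.1'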

The nginx conf was interesting and I'll adjust mine.


Some additional things that someone might find useful:

Keycloak supports separating the token endpoints (the OAuth2/OIDC URLs used for tokens) and the built-in login challenge pages from Keycloak's admin console and the REST API. In other words, you can offer the token endpoints and login service to the public and isolate all the admin and backend stuff to a different hostname and/or URL scheme. Have a look at the KC_HOSTNAME_ADMIN[_URL] config option(s).
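
In env-var form that's roughly the following (URLs are illustrative; these are the pre-24 hostname options):

  # public-facing: token endpoints, login pages, .well-known metadata
  KC_HOSTNAME_URL=https://login.example.com
  # admin console and admin REST API, kept on an internal-only name
  KC_HOSTNAME_ADMIN_URL=https://keycloak-admin.internal.example.com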

Keycloak has a capability called Authorization Services, an implementation of User-Managed Access (UMA2). It's opt-in and probably 99% of Keycloak users don't know it exists. It can be used to implement fine-grained authorization schemes on top of OAuth.


I've found that the realms in Keycloak don't scale quite as well as we would have wanted.

There is some kind of slowdown, and after around 200 realms things start breaking and startup time starts growing uncontrollably.

There's a good chance it's an issue in our setup (which is fairly complex), but every time we look at it we trace the slowdown back into Keycloak itself.


Just letting you know it's not you - this is a well-known, long-standing issue with Keycloak. Typically users see a significant performance cliff at around 300-400 realms. While one realm is not necessarily the same as one tenant in Keycloak, it does make it a significantly larger headache to support multi-tenancy with SSO integrations in a single realm.

I'm afraid I can't give you more details than that; we just moved on from Keycloak at that point.


> 1 keycloak instance can have many realms.

For people not familiar with keycloak, keycloak realms are equivalent to what other auth providers call tenants (Auth0, FusionAuth [which I work for]) or user pools (Cognito). Basically a segmented set of users and configuration related to those users.


Love Keycloak. What I would do differently, though, is run it on a host (or in a k8s pod) and have it serving via HTTP on localhost, then use cloudflared to tunnel and present it as HTTPS. Saves messing around with certificates, etc.; it's all automatic.


My go-to is always this instead:

https://github.com/lucaslorentz/caddy-docker-proxy

A single label on a Docker container, and with correct DNS you'll have an automatically managed certificate right away.


I had this discussion before and thought long about using Caddy, but then decided on nginx, directly on the host, basically following this reasoning [1].

My main motivation is that there is just more information available for nginx due to its wider use. And when you need to customize Caddy due to different requirements by services, you end up with the same or worse complexity compared to an nginx .conf. Nginx is just very robust and the configuration is not so hard to get used to.

[1]: https://nickjanetakis.com/blog/why-i-prefer-running-nginx-on...


Very reasonable!

My go-to for anything more complicated is definitely NGINX, but for very straightforward Docker containers running HTTP services, it's hard to beat the ease of use of the modified version of Caddy I linked.


After looking for a solution for my home lab for a while I ended up with Authentik. It's a comparatively new kid, but it was very easy to set up and the documentation is really excellent!


Thanks for the nod to the authentik documentation! That's my primary role at authentik... we are going to do some restructuring soon, with a goal to keep the current level of excellence while adding in all the new features coming up in our 2024.2.x release, plus add more How To (procedurals). We would appreciate any feedback, and of course we'd love any contributions to our docs in GitHub!


I had a hard time trying to integrate Spring Security 3.1 and Keycloak. All examples are based on @Deprecated connectors. Even in Spring Security there are a lot of huge changes ongoing.


I did something similar, though picked Apache with mod_auth_openidc, which is a certified Relying Party implementation: https://github.com/OpenIDC/mod_auth_openidc

In other words, I can protect arbitrary applications through my reverse proxy and require either certain claims/roles, or simplify auth to the point where my downstream app/API will just receive a bunch of headers like OIDC_CLAIM_sub, OIDC_CLAIM_name, OIDC_CLAIM_email through the internal network, not making me bother with configuring OIDC libraries for all of my APIs and configuring them in each stack that I might use, but rather containing all of that complexity in the reverse proxy. Handling headers in pretty much every stack is trivial, be it something with .NET, Java, Node, Python and so on...

Basically:

  user <==> Apache (with mod_auth_openidc) <==> API (with OIDC_ headers, if logged in and authorized)
                                        OR <==> Keycloak (for logging in/registration, followed by redirect)
Apache probably isn't the ideal tool for this, but it was definitely the easiest to set up and also has mod_md nowadays, making renewing SSL certs (with HTTP-01, at least) closer to how Caddy might do it; I don't even need certbot: https://httpd.apache.org/docs/trunk/mod/mod_md.html
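
To give an idea, the Apache side boils down to something like this (a rough sketch with placeholder values):

  OIDCProviderMetadataURL https://keycloak.example.com/realms/main/.well-known/openid-configuration
  OIDCClientID         reverse-proxy
  OIDCClientSecret     change-me
  OIDCRedirectURI      https://app.example.com/redirect_uri
  OIDCCryptoPassphrase some-long-random-string

  <Location "/">
    AuthType openid-connect
    # require a logged-in user; claim-based rules are also possible, e.g. Require claim roles:admin
    Require valid-user
    # authenticated requests reach the app with the OIDC_CLAIM_* headers mentioned above
    ProxyPass        "http://app:8080/"
    ProxyPassReverse "http://app:8080/"
  </Location>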

Now, right now I'm moving away from storing groups in the Keycloak DB because it's actually easier to do that in the app DB and only allow Keycloak to reason about who users are, not what they can do (since interacting with Keycloak's API to manage users is cumbersome), especially when I want custom permissions: like users being assigned to companies, but also having specific permissions for each of those companies, your typical granular system.

Personally, I'd definitely say that OIDC feels overcomplicated to work with, so I'm glad that I discovered something like this. There's a whole list of other Relying Party implementations, but nobody really seems to be talking about them: https://openid.net/developers/certified-openid-connect-imple...

Also, in regards to Keycloak in particular, most likely you really want to build the "optimized" image, because otherwise the startup times will be annoyingly long: https://www.keycloak.org/server/containers (like the article already points out, but that was a pain point until I got it over with)

Oh, and Keycloak is known to be a bit odd in how it sometimes works behind a reverse proxy; I actually needed to put this in the configuration because otherwise connections would sometimes break:

  SetEnv proxy-initial-not-pooled 1
  SetEnv proxy-nokeepalive 1


> Now, right now I'm moving away from storing groups in the Keycloak DB because it's actually easier to do that in the app DB and only allow Keycloak to reason about who users are, not what they can do (since interacting with Keycloak's API to manage users is cumbersome), especially when I want custom permissions: like users being assigned to companies, but also having specific permissions for each of those companies, your typical granular system.

Have you thought about extracting your authorization to a separate server? It might be overkill if you only have one application, but I've seen demos/had conversations with the folks at cerbos and permit, which both offer open source authorization as a service. That way your user authentication is done by Keycloak, your authorization is done by cerbos/permit/etc, and your application is only responsible for application data/functionality.


This is interesting.

Thanks for sharing.


Why is `docker run` "unusual in production"? I prefer it over `docker-compose` because it eliminates one mostly useless layer of abstraction.


Because when you run 10 services, most of which require at least one supporting service, and for each service you need to specify at least 5 different options (volumes, resource limits, health checks, depends_on, MAC address, IP address, etc.), with just `docker run` you will have a mess. It's much simpler to have a docker-compose file per service with all of its dependent services.

Docker Compose is to plain docker run as C/C++ is to assembly.


Just one reason for me is that I can add the docker-compose.yml to git to track changes over time and have a more reproducible setup.


Anyone using Teleport?

It's not exactly a reverse proxy, but it does add a layer of authentication to web applications, SSH, and remote desktop.


FYI, SSO=Single Sign-On


For your information, FYI=For your information



