Thanks for pointing that out. I definitely get that it's possible, but as far as I know the open source world doesn't offer much infrastructure for implementing this kind of solution in web applications.
Happy to be proven wrong; I'm just not aware of any popular open source HTTPS server that offers this as an integrated solution.
Or better yet, I'd like raw access to the certificate info FROM the application layer on the server side, so I can manage that as needed.
As for handling the TLS part, both Apache and NGINX can verify client certificates. The application only needs to parse the headers they pass along to determine which user is connecting.
If you don't care about identifying the user from the cert (i.e., TLS client certs as a gate, plus standard username/password for login), you can get away with proxying the application through nginx and calling it a day.
Basically, you turn on client verification and you're done. If you want to show an error to unauthenticated users, you can make verification optional (ssl_verify_client optional;) and add something along the lines of:
    if ($ssl_client_verify != SUCCESS) {
        return 403;
    }
nginx can do the client cert verification for you and pass the results to the application as HTTP headers. All you have to do in the application is add a request authentication interceptor that inspects those headers.
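A minimal sketch of that interceptor in Go, assuming nginx has been told which headers to set (the X-SSL-* names below are my choice, not an nginx default):

    // nginx side, inside the proxied location:
    //   proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
    //   proxy_set_header X-SSL-Client-S-DN $ssl_client_s_dn;
    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    // requireClientCert rejects any request for which nginx did not report
    // a successfully verified client certificate.
    func requireClientCert(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.Header.Get("X-SSL-Client-Verify") != "SUCCESS" {
                http.Error(w, "client certificate required", http.StatusForbidden)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // The subject DN identifies the connecting user.
            fmt.Fprintf(w, "hello, %s\n", r.Header.Get("X-SSL-Client-S-DN"))
        })
        // Listen on localhost only: if clients can reach the app without
        // going through the proxy, they can forge these headers.
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", requireClientCert(mux)))
    }

The localhost bind is the important part; these headers are only trustworthy if all traffic is forced through the proxy.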
Right. I may have done a poor job explaining. Imagine if the HTTPS server did all its verification at the protocol level, but then EXPOSED the public key used by the client to the application I wrote. That way I could, at the application level, do app-related things like rejecting users whose public key isn't on my whitelist. It would also make it seamless to build tooling around the application, such as what GitHub (and others) does when it asks you to maintain the public keys you may use when pushing to / pulling from a repo.
But in this case, users could provide public keys they will use when accessing the website from the internet (as opposed to an intranet).
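For what it's worth, this is doable today if the application terminates TLS itself instead of sitting behind a proxy; Go's standard library, for one, hands you the peer certificate chain directly. A minimal sketch (the whitelist entry and file names are made up):

    package main

    import (
        "crypto/sha256"
        "crypto/tls"
        "crypto/x509"
        "encoding/hex"
        "fmt"
        "log"
        "net/http"
    )

    // allowed is a hypothetical whitelist of SHA-256 hashes of the
    // DER-encoded public keys that users have registered, GitHub-style.
    var allowed = map[string]bool{
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08": true, // example value
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        certs := r.TLS.PeerCertificates
        if len(certs) == 0 {
            // Unreachable with RequireAnyClientCert, but cheap to check.
            http.Error(w, "no client certificate", http.StatusForbidden)
            return
        }
        der, err := x509.MarshalPKIXPublicKey(certs[0].PublicKey)
        if err != nil {
            http.Error(w, "unsupported key type", http.StatusForbidden)
            return
        }
        sum := sha256.Sum256(der)
        if !allowed[hex.EncodeToString(sum[:])] {
            http.Error(w, "public key not whitelisted", http.StatusForbidden)
            return
        }
        fmt.Fprintln(w, "hello, recognized key")
    }

    func main() {
        srv := &http.Server{
            Addr:    ":8443",
            Handler: http.HandlerFunc(handler),
            TLSConfig: &tls.Config{
                // Demand a client cert but skip CA verification; the
                // whitelist check above is the real gate.
                ClientAuth: tls.RequireAnyClientCert,
            },
        }
        log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
    }

Note the RequireAnyClientCert: self-signed client keys work, and authorization comes entirely from the whitelist, which is the same model as GitHub's SSH keys.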
I think most open source stuff supports client certs pretty well; the issue is getting them to end users. I personally use mTLS as a two-part authentication/authorization system: services prove their identity to each other with certificates, and humans prove their identity to a proxy server that generates a bearer token for the lifetime of the request. Each application then sees (source application, user) and can make an authorization decision.
I personally use Envoy as the proxy and cert-manager to manage certificates internally. You can peruse my production environment config for my personal projects at https://github.com/jrockway/jrock.us (dunno if that's the real link, my ISP broke routes to github tonight, but it's something like that).
The flow is basically:
1) At application installation time, a cert is provisioned via cert-manager. Each application gets a one-word subject alternative name that is its network identity. The application is configured to use this cert: it requires incoming connections to present a client certificate that validates against the CA, and it makes outgoing connections with its own certificate (a sketch of the incoming half appears after this list). (This integrates nicely with things like Postgres, which expect exactly this sort of setup.) This lets pure service-to-service communication securely validate the other side of the connection. This is nice because, in theory, I don't have to configure each application with a Postgres password; Postgres can just validate the client cert and grant privileges based on that. (I have not set this up yet, however.) I also like the ability to reliably detect misconfiguration: if you misconfigure a DNS record, instead of making requests to the wrong server, the connection just breaks, which saves you a lot of debugging. And, of course, if the NSA is wiretapping your internal network, they don't get to observe the actual traffic. (But they probably compromised your control plane too, so it's all pointless.)
2) The other half is letting things outside the cluster make requests to things inside the cluster. I use an Envoy proxy in the middle; it terminates the end user's TLS connection and routes requests to the desired backend, like every HTTPS reverse proxy ever. I wrote a "control plane" that automates most of the mTLS stuff (it's production/ekglue in the repository; ekglue is an open-source project that is agnostic to mTLS, and my configuration adds it for my setup). At this point, users outside the cluster see a valid jrock.us cert, so they know they've gone to the right site, and applications inside the cluster see that traffic is coming from the proxy and can decide how they want to trust that. Right now, everything I run in my cluster just passes through to its native authentication, so it's pretty pointless, but the hook exists for future applications that care.
3) For applications that want a known human user (or a human-authorized outside service; think dashboards or webhooks), I wrote an Envoy ext_authz plugin that exchanges cookies or bearer tokens for an internal, request-scoped, time-limited access token. Applications can then validate this token without calling out to a third-party service, so no latency is introduced (a sketch of the validation side follows this list). They do have to be configured to do this, and the state of that in the open source world is pretty abysmal. OIDC is helping, and it's trivial to write it into your own application framework. A few applications will just accept an x-remote-user HTTP header, which I found to be adequate, especially if they can trust the proxy via mTLS. Compromising the proxy lets you compromise all upstream apps, though, so I'm looking for a new design.
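To make step 1 concrete, here's a minimal sketch in Go of the incoming half of that setup (the file paths are placeholders; in my environment cert-manager is what writes these files into the pod):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // The CA that signs every service cert in the cluster.
        caPEM, err := os.ReadFile("/etc/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            log.Fatal("no CA certificates parsed")
        }
        srv := &http.Server{
            Addr: ":8443",
            TLSConfig: &tls.Config{
                // Every incoming connection must present a cert that
                // chains to the internal CA.
                ClientAuth: tls.RequireAndVerifyClientCert,
                ClientCAs:  pool,
            },
        }
        // tls.crt carries the one-word SAN that is this service's identity.
        log.Fatal(srv.ListenAndServeTLS("/etc/certs/tls.crt", "/etc/certs/tls.key"))
    }

The outgoing half is the mirror image: a tls.Config whose Certificates field holds the same key pair and whose RootCAs is the same pool.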
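For step 3, "validate without calling out" implies a token the application can check locally. Here's a minimal sketch of what that check could look like, assuming (my assumption, not necessarily how jsso does it) the proxy signs a "user|expiry.signature" token with an Ed25519 key whose public half every application holds:

    package main

    import (
        "crypto/ed25519"
        "encoding/base64"
        "errors"
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // verifyToken checks the proxy's signature and the expiry, then returns
    // the user the token was minted for. No network round trip required.
    func verifyToken(pub ed25519.PublicKey, token string) (string, error) {
        parts := strings.SplitN(token, ".", 2)
        if len(parts) != 2 {
            return "", errors.New("malformed token")
        }
        sig, err := base64.RawURLEncoding.DecodeString(parts[1])
        if err != nil {
            return "", errors.New("malformed signature")
        }
        if !ed25519.Verify(pub, []byte(parts[0]), sig) {
            return "", errors.New("bad signature")
        }
        fields := strings.SplitN(parts[0], "|", 2)
        if len(fields) != 2 {
            return "", errors.New("malformed payload")
        }
        exp, err := strconv.ParseInt(fields[1], 10, 64)
        if err != nil || time.Now().Unix() > exp {
            return "", errors.New("token expired")
        }
        return fields[0], nil
    }

    func main() {
        // Demo: mint a token the way the proxy hypothetically would,
        // then verify it with only the public key.
        pub, priv, _ := ed25519.GenerateKey(nil)
        payload := "alice|" + strconv.FormatInt(time.Now().Add(time.Minute).Unix(), 10)
        token := payload + "." + base64.RawURLEncoding.EncodeToString(ed25519.Sign(priv, []byte(payload)))
        fmt.Println(verifyToken(pub, token))
    }

In practice you'd reach for a JWT or PASETO library rather than hand-rolling a format, but the important property is the same either way: verification is a local signature check, so there's no per-request latency cost.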
I actually wrote this at my last job and don't have the code (it's theirs)... but am slowly rebuilding it in my spare time. Second-system syndrome is a bitch. You can follow along at my jsso repository on GitHub, but it is not ready to be used, and I think most of the stuff I wrote in the design document there is going to change ;)
Anyway, where I'm going with all this is... all the pieces exist to make yourself a secure and reliable production environment. mTLS is pretty straightforward these days, and in addition to the easy route of just doing it yourself, a bunch of frameworks exist to get you even more security (SPIFFE/SPIRE, Istio, etc.). For authenticating human users, most of the work has been done in the closed source world: Okta, Duo, Google's Identity-Aware Proxy, etc.