It's incredibly, painfully difficult. If you search for "Python Okta" you get this: https://developer.okta.com/code/python/
> At this time we do not support official API client libraries (SDKs) for Python. You may fork our legacy Python SDK or join the conversation on this thread and let us know how you’d like to use Okta from Python applications.
I have integrated both OpenID and OAuth by hand many, many times. If Okta offered a page of documentation like this one my project would already be finished: https://developer.github.com/apps/building-oauth-apps/author...
From the linked article:
> This is one of the reasons why, here at Okta, even though our entire platform is built on top of OAuth and OIDC, we spend tons of time and effort trying to build abstractions (in the form of client libraries) to hide those complexities and make securing your web applications simpler.
But this is the cause of my problem! The abstractions they are offering don't work for my use-case. The fact that they have hidden the complexity from me is actively preventing me from completing my goal.
I'm not arguing against abstractions here - what I need is BOTH. I need abstractions that can help me get my job done, combined with well documented non-hidden complexity for me to fall back on if the abstractions don't yet handle my use-case.
I was very impressed by their rules concept, which allowed me to create a quite non-trivial authentication scheme in one day from scratch. They also pay attention to development workflows by having great logs and a debugging tool chain as part of their service. Very cool!
Also, try implementing generation of API-key like tokens for users that can be expired, revoked, authorized etc.
Auth0 has made products that hide complexity, in my opinion, but if those products fit your particular need, sure, they work great.
We don't have any full-time Pythonistas on staff right now, so we deprecated our old stuff. We're hoping to hire and expand that role to build proper SDKs + support in the future.
Right now, the best option is to use generic OIDC/OAuth compliant libraries (since Okta is a generic OAuth/OIDC provider). Sorry :(
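Since Okta publishes a standard OIDC discovery document, a generic library (or even a hand-rolled client) can work against it. A minimal sketch in Python, assuming metadata fetched from the issuer's `/.well-known/openid-configuration` endpoint; the client id and URLs below are placeholders:

```python
import secrets
from urllib.parse import urlencode

def build_authorize_url(metadata, client_id, redirect_uri,
                        scope="openid profile email"):
    """Build a standard OIDC authorization-code request URL from provider
    discovery metadata (normally fetched from
    <issuer>/.well-known/openid-configuration)."""
    state = secrets.token_urlsafe(16)  # CSRF protection; store it in the session
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return metadata["authorization_endpoint"] + "?" + urlencode(params), state

# Hypothetical values, for illustration only:
meta = {"authorization_endpoint": "https://example.okta.com/oauth2/v1/authorize"}
url, state = build_authorize_url(meta, "my-client-id",
                                 "https://app.example/callback")
```

From there, the callback handler exchanges the returned `code` at the metadata's `token_endpoint`, which any generic OAuth library handles.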
I am expecting this quarter to start building out a platform that will use Okta for authentication and authorization.
Do you think Okta would be willing to pay for some engineering hours if we were to take on the mantle of producing a modern Python SDK? If so, I'll be happy to start running that up the line here.
Failing that, are there any resources you recommend for going forward with Python and Okta, in addition to using generic OAuth libraries?
My team did use Okta React briefly for an internal project but we couldn't get it to play nicely with Cypress (e2e testing) so we ended up removing it sadly.
I'm pretty sure we had just implemented it incorrectly though since we were still getting to grips with React at the time (as a team that previously had little to no combined UI experience) :)
Barring that, Okta at least is a conformant OIDC implementation and you can use standard libraries for it.
The community for FusionAuth is growing quickly and it has an open issue tracker (https://github.com/FusionAuth/fusionauth-issues), good docs (https://fusionauth.io/docs/v1/tech/), and many open source projects as well (https://github.com/FusionAuth).
I'm one of the developers of FusionAuth, so you should give it a try and let us know what you think.
I'm curious why the need for Elastic. You already have a database which can be used to search all the user objects. What does Elastic bring to the party other than an extra ops burden?
This is not necessarily a criticism, just trying to understand what went into that architectural decision.
While Elastic can be a bit cumbersome, we have made bundles that make it simpler to deploy and manage. We have also worked hard to secure it.
I've seen a similar situation with payment processors, where they have SDKs in a variety of languages, but not the one I'm using (our back end is in Perl), so I use the underlying low level API.
Amusingly, it often happens that I look at their Java or PHP SDKs, which involve seemingly a bewildering number of classes and methods, and find that just using their underlying REST API directly is massively simpler, clearer, and takes less code, to the point I don't use their SDKs even if I am working in a language they have an SDK for.
But payment processing is pretty simple. Third party authentication is probably more complicated, and so it could be that the SDKs are doing a lot more than just providing a fairly straightforward wrapper for the underlying low level API.
 https://developer.okta.com/authentication-guide/saml-login/ ; https://gravitational.com/teleport/docs/ssh_okta/
Yes, you don't want people rolling their own crypto libraries. But if nothing has been rolled yet for your environment by anyone? You need a standardized open protocol that you have at least a chance of rolling correct crypto against.
Closing that protocol up in a nice shiny (but exclusively-owned-and-maintained) box does you no good if the box can't be bent to fit your use case.
Except this is Hacker News, where caring is fundamental. I'll pass on the Okta advertisement.
This topic is near and dear to my heart because I've been working in this industry for many years now, and it is frustrating that there is a huge focus on building more OAuth/OIDC tools and encouraging developers to get directly involved in working with things like JWTs directly.
There are so many ways to mess things up at foundational levels today; I just really want to see better tooling created in the open source communities (and in paid products) to abstract things like OAuth/OIDC so that developers don't need to constantly be fiddling around with these lower-level protocols, where the risk of messing up is extremely high.
I have a ton of respect for Todd, but yes, this writer is just doing their job, which is to promote Okta.
Maybe it's aimed at management?
It may be the lightest of plugs, but it means that the writer is biased. Being biased doesn't mean you're wrong, but it throws the entire argument into doubt. I feel like I wasted my time.
Also, any security professional who just mentions in passing that OAuth was for authorization loses a little of my trust. It's true that Auth is short for Authorization. But an important nuance is that it isn't authorization for the user but for the application, authorization by the user for this new app to get some information from one of the user's old apps (like Google). For a programmer like me, this clears up why OAuth stands for authorization but always seemed more like authentication. From my understanding, OAuth doesn't handle any authorization of the user within your app. You have to handle that some other way.
A lot of people just don’t have the background to do good security work, and when you try to explain why a short-lived credential is good, or why we can’t have a plain-text password embedded in the git repo, their eyes glaze over and you lose them. So it’s much easier to just dictate that it will be a TLS connection and you’ll use OAuth, and then they go download a library and everything is good.
Where I ran into problems is with using OAuth for federated user logins. I did a proof of concept with Facebook, then after a couple hours they decided I was a robot and that was that - account closed, no recourse. Luckily that happened before I had a large user base using it - I don’t know what you do if Facebook decides you’re a robot and suddenly 50,000 people can’t authenticate to your site; guess you learn something. A week later most of the other sites decided I was a robot and terminated my accounts. Google is the only site that didn’t. (Which is good because I have a decade of e-mail and photos I’d hate to lose)
This is exactly why I am paranoid about depending entirely on social sign up/in. So if the API provides it, I believe it's a good idea to capture the email address associated with the social account and either generate a hard password for later reset if necessary or request the user set one up. The latter obviously reduces a little convenience of the social sign up, but does hint to the user that they can sign-in otherwise and gives them control.
Authentication is just too important to completely outsource without recourse.
I agree 100%. You need to own your users and all of their data. Along those same lines, I wouldn't want my user data going into a multi-tenant, cloud-hosted solution like Okta. I'd much prefer to store everything on my own servers or use an on-premise solution like FusionAuth or KeyCloak.
I see what you did there. :)
I recently decided against using JWT bearer tokens due to the concerns, expressed in both the RFCs and OWASP guidance, that they are too risky compared to session cookies.
But really I’m wondering if the risk of screwing up CSRF protection isn’t more of a concern than token leaks.
I kept thinking that for our use case, a simpler AEAD token or perhaps something like Macaroons (if there were more battle-tested libraries for it...) would suffice. In the end I implemented a partial solution with one flow and one JWT signing algorithm so the other services could rely on existing client libraries for their respective programming languages.
Of course none of it ended up being used, as SAML (there's another can of worms...) support became a requirement and some large framework was brought in for that corporate software feel-good factor...
I agree with the sentiment that it would be nice if there would be new development in this area, even (or especially) if none of the big players are involved, but non-trivial crypto protocols and frameworks seem to need a lot of momentum to break out of their local ecosystems. It's not easy to design something that fits different use cases yet doesn't break at the joints.
This was my experience doing _anything_ at all with OAuth. Now I am seeing the same with OpenID Connect, though not as severe. The spec was narrowed a bit, but it is still way too broad.
Also you may want to look at the OIDC conformance testing tools for testing if you are building your own library or server. Certification costs money, but the conformance testing tools I believe are all open source.
Never building this infrastructure again. What a huge time saver.
My company used Okta. We ended up cancelling it as it did not seem to offer any value whatsoever and was crazy expensive for our 300-odd users.
I looked into using Okta or Auth0 for this sort of setup, but both were prohibitively expensive for a saas app.
And it integrates with AAD and ADFS.
When you canceled Okta, did you build it in house, or was the AWS Cognito / AWS solution the replacement?
Did you look at anything on-prem like Keycloak or FusionAuth?
One specific problem I've faced is that it's very complicated to configure a development environment for unit and integration testing. I think there is huge demand for something that works well for Docker Compose and unit testing frameworks while still being a production grade solution.
I haven't evaluated everything out there but so far the best solution I've found is Keycloak. Unfortunately it takes 20 seconds to start on my computer, and up until very recently I had to go through the configuration procedure each time I launched tests. I was fortunate enough to get a PR (https://github.com/jboss-dockerfiles/keycloak/pull/152) accepted which allowed me to load a working configuration file directly into the unaltered Docker container with some users and roles for testing purposes.
So if you have something that starts quickly, is easy to combine with Docker Compose and with unit testing frameworks, and could be used in a production scenario... Let me know. ;-)
Oh, and it also needs to be on-premise.
If on-premise is a requirement, you may want to look at FusionAuth. On premise, runs on Mac, Linux, Windows, Docker, and Kubernetes. This should check all of your boxes, installs in a few moments with the fast path install or docker-compose up. https://fusionauth.io/
No need to store a password and correctly hash it, provide a password change function, or provide a password reset function (what a PITA to get correct).
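For comparison, here's roughly what you're on the hook for if you do roll it yourself; a minimal stdlib-only sketch of salted password hashing (the scrypt parameters are illustrative, not a recommendation):

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> str:
    """Salted scrypt hash using only the standard library."""
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt.hex() + "$" + digest.hex()

def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash with the stored salt; compare in constant time."""
    salt_hex, digest_hex = stored.split("$")
    digest = hashlib.scrypt(password.encode(), salt=bytes.fromhex(salt_hex),
                            n=2**14, r=8, p=1)
    return hmac.compare_digest(digest.hex(), digest_hex)
```

And that's before change/reset flows, rate limiting, and breach-password checks - which is the point being made above.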
Why hasn't anybody created something comparable yet?
OpenId Connect and OAuth work great and are reasonably designed. The documentation is dense but you'll do fine if you read some of the popular blogs about it summarizing the different flavors.
Their comparison to "rolling your own crypto" is jarring. Rolling != using. You're likely a moron if you write your own crypto library, and the same goes for writing your own OAuth/OIDC handling instead of using pre-made libraries.
Our web services are all running under the same base domain reactos.org and we wanted a Single Sign-On system for all of them. I was surprised to find out that something this simple seems to be an unresolved problem: CAS, OpenID Connect, and SAML all want you to set up heavyweight authentication servers and a certificate infrastructure for identifying each participating web service. A lot of protocol messages need to travel for a simple action like a user login, when a site-wide session cookie could just do the same job. Sure, those systems support advanced features like access control and delegated authentication, but this is all not required if you just want to link a few web services under your own control, say a MediaWiki and phpBB forums.
RosLogin simply sets a site-wide session cookie on each user login. Each web service then just calls RosLogin::isLoggedIn() to check its validity and retrieve the user name. No certificates, no protocol messages, and no heavyweight server software is involved. Together with centralized Login, Registration, and Self-Service pages, RosLogin currently needs no more than 1600 lines of PHP code - perfectly auditable from a security standpoint!
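The core mechanism described here (RosLogin itself is PHP) could be sketched in Python as an HMAC-signed, site-wide session cookie; the names and shared secret below are hypothetical, and a real implementation would also need to reject `|` in usernames:

```python
import hashlib
import hmac
import time

SECRET = b"shared-secret-for-all-services-on-this-domain"  # illustrative only

def make_session_cookie(username: str, ttl: int = 3600) -> str:
    """Issue a site-wide cookie: username, expiry, and an HMAC over both."""
    expiry = str(int(time.time()) + ttl)
    payload = f"{username}|{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def is_logged_in(cookie: str):
    """Return the username if the cookie is valid and unexpired, else None."""
    try:
        username, expiry, sig = cookie.rsplit("|", 2)
    except ValueError:
        return None
    expected = hmac.new(SECRET, f"{username}|{expiry}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected) or int(expiry) < time.time():
        return None
    return username
```

Each service on the shared domain verifies the cookie locally with the shared secret - no protocol round-trips on login checks, which matches the design described above.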
The ReactOS infrastructure is mostly built around PHP web services, so PHP bindings and plugins for Drupal, MediaWiki, and phpBB are currently the only ones available for RosLogin. However, our few non-PHP services can still plug into the same user database by connecting to RosLogin's underlying OpenLDAP directory.
OAuth 1.0a provides integrity only for the client request parameters. It does not provide integrity for the client request headers or body. It does not provide integrity for any of the server response - which means while clients can make valid requests, a MITM could be providing it completely fraudulent data to act on.
Likewise, since OAuth 1.0a does not provide integrity over the entire request message, it also does not provide non-repudiation over the entire request message, or non-repudiation of the client for any server responses.
Some organizations have chosen to leverage OAuth 1.0a with non-standardized extensions parameters like a body hash to try and partially solve these issues, but they tend to be very underspecified (which hashing algorithm? Before or after Content-Encoding / Transfer-Encoding ?)
Finally, there are no standardized modern cryptographic methods for OAuth 1.0a - it will likely remain on SHA-1 forever.
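To make the limitation concrete, here is a rough sketch of the RFC 5849 HMAC-SHA1 signature; the base string covers only the method, URL, and parameters, so headers, non-form bodies, and the server's response are all unsigned (parameter normalization is simplified here - e.g. duplicate keys aren't handled):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth1_signature(method, url, params, consumer_secret, token_secret=""):
    """Simplified RFC 5849 HMAC-SHA1 request signature.

    Note what is covered: method + URL + sorted, percent-encoded params.
    Nothing binds request headers, a non-form body, or the response.
    """
    enc = lambda s: quote(str(s), safe="~")
    base_params = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    base_string = "&".join([method.upper(), enc(url), enc(base_params)])
    key = f"{enc(consumer_secret)}&{enc(token_secret)}".encode()
    mac = hmac.new(key, base_string.encode(), hashlib.sha1)
    return base64.b64encode(mac.digest()).decode()
```

Note also the hard-wired SHA-1 - the spec defines HMAC-SHA1 and RSA-SHA1 and nothing newer, which is the point above.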
OAuth 2 provides integrity over the entire request/response by relying on TLS. Generally the only reason you would worry about this is if you are paranoid that:
- Clients are turning off TLS validation. Granted, this is still pretty common, and can be solved for your use case by requiring mutual TLS.
- Normally trusted intermediaries cannot be trusted not to modify your traffic. This would be something like a compromised TLS-terminating reverse proxy. This tends to not be solvable by mutual TLS.
There have been many failed attempts at improving on the signature algorithms in OAuth 1.0a - it turns out it is hard to make guarantees that there won't be legitimate reasons for an intermediary to muck with the request/response.
There is actually no reason why someone couldn't extract the existing request signature logic from OAuth 1.0a (sans token-secret in the HMAC, which IMHO was pointless) and propose it as an independent spec for doing end-to-end request signatures. But so far it seems nobody has wanted to do so, instead focusing (and stumbling) on trying to provide something better.
Oddly, if I want to do it, I rarely find this "baked in". Many other features which are less common, are, or else have middleware that makes it pretty much just "add this line to your config file".
I have had to mess with OAuth two or three times in ten years, so needless to say I am hapless at it. If I did it less often, or more often, it would be fine, but it's not. Does anyone know why this isn't rolled into frameworks like Django, as a boolean that you set to TRUE?
Even though the middleware handles JWT validation, cookie storage, redirects etc (which are a big part of it) there are still a tonne of little details to OAuth that clients need to know enough to specify:
- what grant flow are you using (code, hybrid, implicit, resource owner etc.. all are valid for different and overlapping scenarios, based on how secure you want to be and how much hassle you want your users to experience)
- client id and secret management, and registering this with your server
- refresh tokens: i.e. how do I prevent my user needing to log in every 15 minutes without making my jwt last an insecure amount of time.
etc. etc. etc. I've built OAuth/OIDC into dozens of apps, and have deployed my own identity providers (not rolled my own, thank the gods, just used a framework for the server side), and it's still the bane of my development existence.
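The refresh-token dance in particular gets reimplemented in every app. A sketch of one common pattern - a cache that refreshes shortly before expiry, so the access token's lifetime can stay short without forcing re-login (`refresh_fn` is a stand-in for whatever performs the refresh_token grant against your AS):

```python
import time

class TokenCache:
    """Keep a short-lived access token fresh via a refresh callback.

    `refresh_fn` must return (access_token, lifetime_seconds).
    `leeway` refreshes a bit early to avoid using an about-to-expire token.
    """
    def __init__(self, refresh_fn, leeway=60, clock=time.time):
        self._refresh_fn = refresh_fn
        self._leeway = leeway
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def get(self):
        if self._token is None or self._clock() >= self._expires_at - self._leeway:
            self._token, lifetime = self._refresh_fn()
            self._expires_at = self._clock() + lifetime
        return self._token
```

The injected `clock` is just there to make the expiry logic testable; in production the default `time.time` applies.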
1. You want to perform a stronger form of authentication for users who are interacting with certain parts of your application
In this case, you can proceed in two ways. One would be to use the OpenID Connect authentication context parameter and request the stronger form of auth directly. I personally dislike this approach because it bleeds business logic into all your clients, although it is popular in some government use cases (where authentication levels are part of the API contract).
You can also have the applications represent the elevated security context as scopes (e.g. a moderation or admin scope). The business logic within the AS can interpret requests for these scopes as requiring stronger authentication.
2. You want to reauthenticate the user when they go to an elevated security context.
The AS should be responsible for your authentication policy, and deployments tend to go simpler if you don't split this responsibility with the application.
The traditional way to do this would be similar to the scope rule above: scopes representing access to elevated contexts, and the AS recognizing it needs to enforce stricter authentication rules for this to work. For instance, you could have a rule that the user needs to reauthenticate every half hour, and make access tokens which have elevated privilege scopes have a limited lifetime to force retrieving a new token.
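As a sketch of that rule, assuming the token carries the OIDC `auth_time` claim; the scope names and half-hour window are hypothetical:

```python
import time

ELEVATED_SCOPES = {"admin", "moderation"}  # hypothetical elevated scopes

def needs_step_up(token_claims, requested_scopes, max_age=1800, now=None):
    """AS-side rule: elevated scopes require that the user authenticated
    within the last `max_age` seconds (here, half an hour).

    `auth_time` is the OIDC claim recording when authentication occurred.
    """
    now = time.time() if now is None else now
    if not ELEVATED_SCOPES.intersection(requested_scopes):
        return False  # no elevated scope requested, normal policy applies
    return now - token_claims.get("auth_time", 0) > max_age
```

Pairing this with short lifetimes on elevated-scope access tokens forces the client back through the AS, where the rule is enforced.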
A more modern way to do this might involve stepping back and looking holistically at why you are reauthenticating the user. This might be because you are worried that someone may have walked up to an unlocked machine and proceeded to try to access administrative features.
For this, you might instead look at using EMM and machine policy to reduce that risk rather than impacting UX - such as reducing the time of the screen lock and eliminating the ability to use hot corners to keep the screen awake for users with administrative access. Your AS can then detect and rely on EMM compliance as a factor.
I haven't used Cognito, but I work for FusionAuth (https://fusionauth.io) and we have a number of developers that have switched from Cognito to FusionAuth because we cover more of their use cases outside of plain authentication.
Not sure if you have had similar experience with Cognito being limited.
OpenID Connect just standardised the existing practice of being authorised to use an identity provider as a type of authentication.
Edit: actually something something o8:
Again, this was required for USERS, not developers.
This is someone asking how to log in: https://webapps.stackexchange.com/questions/18899/how-do-i-f...
Combine these, and Google would have been perfectly capable of making everyone just type google.com. That they didn't says more about Google than OpenID.
Besides, people were perfectly capable of showing the same pickers back then as you are forced to use with the current OIDC/OAuth2 abomination.
HN did the same thing. In fact pretty much every OpenID site implemented this weird expectation. Part of helping a standard get adopted is making sure it's implemented correctly.
There's nothing specified anywhere that allows an application to interrogate an RP about the available scopes. I just don't see how fine-grained resource access can be done with OIDC without requiring the user to grant much coarser access first. I found a draft spec once but it was abandoned and the author didn't reply to an email I sent...
WWW-Authenticate: Bearer realm="example", error="insufficient_scope", scope="view_topic"
In terms of a map of what scopes are required for all resources on a site - unfortunately just like other non-RESTful things that is conveyed outside of the interactions today via something like OpenAPI descriptors and documentation.
The AS can convey a list of requestable scopes in its metadata. There were specs like host_meta which could have been easily leveraged to do the same for a resource server, but I am not sure host_meta actually went anywhere.
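Parsing the `WWW-Authenticate` challenge shown earlier is at least straightforward; a minimal sketch (naive about escapes and commas inside quoted strings):

```python
import re

def parse_bearer_challenge(header: str) -> dict:
    """Extract auth-params from an RFC 6750 Bearer challenge, e.g. to
    discover which scope a rejected request was missing."""
    if not header.startswith("Bearer"):
        return {}
    return dict(re.findall(r'(\w+)="([^"]*)"', header))
```

A client can use the returned `scope` value to re-request authorization with exactly the scope the resource demanded, rather than guessing.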
So you can't implement an independent RS which does not care about clients and how they authenticate their users.
OpenID Connect adds things like dynamic client registration for a client (acting as an RP) to get authentication information, and you can use OpenID Connect to get both that authentication token and an access token at the same time.
However, this unfortunately doesn't extend to weakening the dependency on a protected resource receiving one of these access tokens needing a relationship with the issuing AS.
Unfortunately, dynamic client relationships haven't caught on enough for there to be focus on this issue. But on the flip side, there would be a lot of other agreements around what those access tokens mean and what API they can apply to before you could have such a dynamic authorization relationship.
Now, if the goal is for _your own_ protected resources to rely on authorizations provided by other ASs - likely you just shouldn't do it that way. Instead, you should have a local AS that itself has relationships with multiple other ASs acting as an intermediary (or what Microsoft has historically called an STS). It authenticates based on other providers, gets their authentication and access tokens, and translates them into tokens that the local environment can understand based on a single, centralized policy.
As a resource server: https://tools.ietf.org/html/rfc6749#page-10
Access tokens don't have to be JWTs. And some OpenID authentication servers give out opaque tokens: your resource server has to know how to call the AS to check the token and get some user info if available.
- a value only the AS understands (in which case the protected resources will need to use an API to introspect them)
- a format that both the AS and protected resources can understand (such as a legacy crypto format, a COSE token, or a database key)
If your client has to pull apart an access token to extract information and make a decision, you are doing OAuth wrong. In particular, that client code will never work against another AS, and it is more fragile than normal against changes on the AS.
When you get a token you are not sure to get it as a JWT. If it is a JWT, the issuer is not enough to know what endpoints it provides.
You still have to care about your clients' authorization provider so you can check their access tokens. But you're building a resource server: you should not have to care about all this. Creating a resource with this identity from this issuer? Go ahead. Want to access it again? Let me check your token in a standardized way: I don't have to know if it was issued by Google, the Canadian Government, or your own personal server.
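A sketch of what the resource server's side of that check could look like, assuming the token has already been introspected and you hold an RFC 7662-shaped response:

```python
import time

def check_introspection(resp: dict, required_scope: str, now=None) -> bool:
    """Interpret an RFC 7662 token-introspection response on a resource
    server: the token must be active, unexpired, and carry the scope."""
    now = time.time() if now is None else now
    if not resp.get("active", False):
        return False
    if "exp" in resp and resp["exp"] < now:
        return False
    # RFC 7662 "scope" is a space-separated string
    return required_scope in resp.get("scope", "").split()
```

The introspection call itself is just a POST to the AS's introspection endpoint; the point is that the response shape is standardized, so the RS logic above doesn't depend on which AS issued the token.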
Like in a big way that I have no idea what the fuck OpenID is, what authorization and authentication are in this context, and why oauth doesn't handle both.
Authentication = Am I this person? Authorization = Am I allowed to do this?
Technically, OAuth only cares about Authorization. It provides an opaque token to the client that authorizes future requests. It doesn't tell the client anything about who the token is for.
Turns out most clients need to know who is logged in (such as loading basic profile information, email address, avatar, etc). In practice this is usually done through some sort of /user endpoint that accepts an OAuth token. This /user endpoint implementation is service specific.
OpenID Connect tries to fill that space by providing a standardized way to perform authentication and identity information.
Any assertion that you are dealing with a particular person needs to be targeted at you. There were security vulnerabilities with some early OAuth deployments that used OAuth for authentication which didn't understand this - and that allow one service that had a relationship with a user to act as that user against all other services which trusted that AS.
For Facebook, they solve this by having the token stamp information about which client it was issued to, which the client is required to verify. For OpenID Connect, they decided to preserve the ability of OAuth access tokens to be opaque to the client, and create a new token called an id_token.
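The targeting rule is the crux: an id_token is only safe to use for login if it was issued by the provider you expect and audience-restricted to *your* client. A sketch of those claim checks (signature and expiry validation, which a real JOSE library must also do, are omitted here):

```python
def check_id_token_claims(claims: dict, expected_issuer: str,
                          my_client_id: str) -> bool:
    """Minimal OIDC id_token claim checks: issuer must match, and this
    client must be in the audience. Without the `aud` check, a token a
    user granted to some *other* app could be replayed to log in here."""
    if claims.get("iss") != expected_issuer:
        return False
    aud = claims.get("aud")
    aud = aud if isinstance(aud, list) else [aud]
    return my_client_id in aud
```

This is exactly the check the vulnerable early OAuth-as-login deployments skipped: they accepted any valid token from the provider, regardless of which client it was issued to.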
I understand there are cases and people for which this is needed, but as for me, I bloody never want anybody to access my "user data in another service", and that's why I strongly prefer plain old email sign-up: in many cases, if you sign up with your Google account, the service will also request your contacts list, your personal details, and everything, and I never want to share those.
Nevertheless the single sign-in feature OAuth/OpenID provide seems very convenient (unless implemented improperly when the service would take your OpenID and still ask you to set a password, a login name and/or e-mail address) so I still use it for some services.
In the enterprise space these solutions help a lot. Especially with the rise of enterprise cloud software solutions like Salesforce, Workday, and ServiceNow.
Chase recently switched over to OIDC which was pretty neat, but for most banks, it is an absolute nightmare.
The ideal use case here is for banks, and services that store a lot of sensitive data. If you want to pull your data from your bank into a service like Mint, you would ideally like for Mint to only get read access to your transactions, nothing else: no ability to send $$$, etc.
IIRC the user is rarely given a choice, same as when you install an Android app from the Google Play store - they just tell you the app demands to access this and that, and give you an Ok/Cancel choice, instead of options to allow or disallow access to each particular element in the app's list of demands and still install it regardless of what the app author wants (I don't mind if some features won't work then). IMHO this is seriously wrong.
You're forgetting that local applications fall into this category too: email, calendaring, storage, messaging, remote access, music/movie streaming, VPN, Twitter, code repositories, container registries, cloud cli tools, chat, chatbots, database access, directory access.
Wouldn't it be nice if you never had to click "Save my password" or use a service account ever again?