Nobody Cares About OAuth or OpenID Connect (okta.com)
164 points by rdegges 23 days ago | 115 comments

I'm trying to build a simple Python app that authenticates against corporate Okta right now.

It's incredibly, painfully difficult. If you search for "Python Okta" you get this: https://developer.okta.com/code/python/

> At this time we do not support official API client libraries (SDKs) for Python. You may fork our legacy Python SDK or join the conversation on this thread and let us know how you’d like to use Okta from Python applications.

I have integrated both OpenID and OAuth by hand many, many times. If Okta offered a page of documentation like this one my project would already be finished: https://developer.github.com/apps/building-oauth-apps/author...

From the linked article:

> This is one of the reasons why, here at Okta, even though our entire platform is built on top of OAuth and OIDC, we spend tons of time and effort trying to build abstractions (in the form of client libraries) to hide those complexities and make securing your web applications simpler.

But this is the cause of my problem! The abstractions they are offering don't work for my use-case. The fact that they have hidden the complexity from me is actively preventing me from completing my goal.

I'm not arguing against abstractions here - what I need is BOTH. I need abstractions that can help me get my job done, combined with well documented non-hidden complexity for me to fall back on if the abstractions don't yet handle my use-case.

My favorite identity provider of all time is https://auth0.com

I was very impressed by their rules concept [1], which allowed me to create a quite non-trivial authentication scheme in one day from scratch [2]. They also pay attention to development workflows by having great logs and a debugging toolchain as part of their service, very cool!

[1] https://auth0.com/docs/rules [2] https://gravitational.com/blog/aws-github-sso/

Auth0 does the same thing though. They make library abstractions such as "passwordless" and turn it into a separate authentication flow altogether, whereas in reality it is just a matter of generating temporary passwords. There is no reason you shouldn't be able to simply enable and disable temporary passwords for any user.

Also, try implementing generation of API-key like tokens for users that can be expired, revoked, authorized etc.

Auth0 have made products that hide complexity, in my opinion, but if those products fit your particular need, sure it works great.

Sounds like there is room in this market for competition

We don't have any official Okta Python library support at the moment :(

We don't have any full-time Pythonistas on staff right now, so we deprecated our old stuff. We're hoping to hire for that role and expand it to build proper SDKs + support in the future.

Right now, the best option is to use generic OIDC/OAuth compliant libraries (since Okta is a generic OAuth/OIDC provider). Sorry :(

The company I work for recently became an Okta Partner... and I'm a Software Engineer that primarily works in Python.

I am expecting this quarter to start building out a platform that will use Okta for authentication and authorization.

Do you think Okta would be willing to pay for some engineering hours if we were to take on the mantle of producing a modern Python SDK? If so, I'll be happy to start running that up the line here.

Failing that, are there any resources you recommend for going forward with Python and Okta, in addition to using generic OAuth libraries?

That is awesome! I'm going to forward this comment to our DevEx team and see if that's something we could do. Sounds great to me =D

Excellent -- I sent you an email to the address in your profile from my work email in case anything comes of it.

I don't particularly need a Python SDK. What I need is extremely clear documentation on how to roll my own authentication code against Okta - the equivalent of https://developer.github.com/apps/building-oauth-apps/author...
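For anyone else rolling their own against an OIDC provider, the first half of the GitHub-style flow is just constructing the authorization redirect. A minimal sketch, assuming the authorization-code flow; the Okta-style endpoint, client ID, and redirect URI below are hypothetical placeholders:

```python
import secrets
from urllib.parse import urlencode

def build_authorize_url(authorize_endpoint, client_id, redirect_uri, state,
                        scope="openid profile email"):
    """Construct the OAuth 2.0 / OIDC authorization-code request URL.

    The user's browser gets redirected here; on success the provider
    redirects back to redirect_uri with ?code=...&state=...
    """
    params = {
        "response_type": "code",  # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,           # "openid" is what makes this an OIDC request
        "state": state,           # CSRF protection: must be verified on callback
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

# Hypothetical tenant and client:
state = secrets.token_urlsafe(16)
url = build_authorize_url(
    "https://example.okta.com/oauth2/v1/authorize",
    client_id="my-client-id",
    redirect_uri="https://myapp.example.com/callback",
    state=state,
)
```

On the callback, the `state` value must match the one stored in the user's session before the code is exchanged for tokens.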

We use Okta internally where I work and it's been fairly smooth as far as the end user experience. I'm sure it hasn't been perfect but it's always a pleasure having one sign on for everything

My team did use Okta React briefly for an internal project but we couldn't get it to play nicely with Cypress (e2e testing) so we ended up removing it sadly.

I'm pretty sure we had just implemented it incorrectly though since we were still getting to grips with React at the time (as a team that previously had little to no combined UI experience) :)

Interesting. Someone I worked with at a previous job left to go to Okta. He did our Django/Python auth stuff. His LinkedIn says he's still at Okta.

Pretty much the same situation here, we just disabled 2fa today on our aws integration because the web-only flow made it impossible to use cli tools.

You don't have to use a web-only flow for MFA. We use a tool that prompts for MFA, then execs any CLI with the environment set up with the STS credentials, e.g. https://gist.github.com/rectalogic/e99c10bd43a2a8f6542680953...
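The gist itself isn't reproduced here, but the general shape, assuming the standard STS `Credentials` response fields, is to map the temporary credentials onto the environment variables the AWS CLI and SDKs read before exec'ing the tool:

```python
def sts_credentials_to_env(credentials):
    """Map an STS Credentials dict (the shape returned by
    GetSessionToken / AssumeRole) onto the environment variables
    that the AWS CLI and SDKs read."""
    return {
        "AWS_ACCESS_KEY_ID": credentials["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": credentials["SecretAccessKey"],
        "AWS_SESSION_TOKEN": credentials["SessionToken"],
    }

# With boto3 the credentials would come from something like
#   sts.get_session_token(SerialNumber=mfa_serial, TokenCode=code)["Credentials"]
# after prompting the user for the MFA code; the CLI tool is then exec'd
# with os.environ updated from this dict.
env = sts_credentials_to_env({
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "example-secret",
    "SessionToken": "example-token",
})
```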

We just use google oauth which kicks you to okta. This lets us use the x/oauth/google packages from google itself.

FWIW, if all you're doing is authenticating a web application, you really can just do it off of the OIDC spec. It's a small subset of the protocol and can be easily understood. I did it recently, writing a multi-tenant application where each tenant can have its own OIDC providers, and it took me about three hours to do and to do correctly (verified against Auth0 and Okta, at least).

Barring that, Okta at least is a conformant OIDC implementation and you can use standard libraries for it.
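To illustrate the "small subset": after the code-for-token exchange and JWT signature verification against the provider's JWKS, the spec's required ID-token checks amount to a handful of claim comparisons. A sketch (claim names are the standard OIDC ones; signature verification is assumed to have happened already):

```python
import time

def validate_id_token_claims(claims, *, issuer, client_id, nonce=None, now=None):
    """Standard OIDC ID-token claim checks.

    Assumes the JWT signature has already been verified against the
    provider's published JWKS; this only validates the decoded claims.
    """
    now = time.time() if now is None else now
    if claims.get("iss") != issuer:
        raise ValueError("unexpected issuer")
    aud = claims.get("aud")
    # 'aud' may be a single string or a list; our client_id must be in it
    if client_id != aud and client_id not in (aud if isinstance(aud, list) else []):
        raise ValueError("token not intended for this client")
    if claims.get("exp", 0) <= now:
        raise ValueError("token expired")
    if nonce is not None and claims.get("nonce") != nonce:
        raise ValueError("nonce mismatch")
    return claims
```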

I'm surprised no one has mentioned FusionAuth (https://fusionauth.io). It's free, has a Python library, and the features that most apps need. There are a couple of people using Python with FusionAuth right now.

The community for FusionAuth is growing quickly and it has an open issue tracker (https://github.com/FusionAuth/fusionauth-issues), good docs (https://fusionauth.io/docs/v1/tech/), and many open source projects as well (https://github.com/FusionAuth).

I'm one of the developers of FusionAuth, so you should give it a try and let us know what you think.

I like what I see provided out of the box by FusionAuth, but I'm rather wary of the hard dependency on Elasticsearch.

I'm curious why the need for Elastic. You already have a database which can be used to search all the user objects. What does Elastic bring to the party other than an extra ops burden?

This is not necessarily a criticism, just trying to understand what went into that architectural decision.

Elastic provides much more scalable and faster search over freeform user attributes. Using the database's search works in some cases, but when you have freeform data (for example, favoriteColor), it breaks down quickly.

While Elastic can be a bit cumbersome, we have made bundles that make it simpler to deploy and manage. We have also worked hard to secure it.

Running FusionAuth in Docker or Kubernetes is a huge benefit to those running on premise.

They say they have a REST API. Could you use that directly from Python instead of using an SDK?

I've seen a similar situation with payment processors, where they have SDKs in a variety of languages, but not the one I'm using (our back end is in Perl), so I use the underlying low level API.

Amusingly, it often happens that I look at their Java or PHP SDKs, which involve seemingly a bewildering number of classes and methods, and find that just using their underlying REST API directly is massively simpler, clearer, and takes less code, to the point I don't use their SDKs even if I am working in a language they have an SDK for.

But payment processing is pretty simple. Third party authentication is probably more complicated, and so it could be that the SDKs are doing a lot more than just providing a fairly straightforward wrapper for the underlying low level API.
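The "just call the REST API" approach the parent describes usually reduces to a wrapper like the following sketch. The endpoint path and payload are hypothetical, and the HTTP send function is injected rather than hard-coded so the wrapper stays library-agnostic:

```python
import json

class RestClient:
    """Minimal REST wrapper: base URL + bearer token, one generic method."""

    def __init__(self, base_url, token, send):
        self.base_url = base_url.rstrip("/")
        self.token = token
        self.send = send  # callable(method, url, headers, data) -> parsed response

    def request(self, method, path, body=None):
        headers = {
            "Authorization": f"Bearer {self.token}",
            "Accept": "application/json",
        }
        data = None
        if body is not None:
            headers["Content-Type"] = "application/json"
            data = json.dumps(body)
        return self.send(method, f"{self.base_url}{path}", headers, data)
```

With `requests` installed, `send` could be `lambda m, u, h, d: requests.request(m, u, headers=h, data=d).json()`; in tests it can be a stub.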

We also ran into issues with using the Okta OAuth/OIDC integration with Teleport and ended up using SAML [0]. Perhaps more painful, especially if you are more familiar with OAuth/OIDC, but it worked. This was when Okta OAuth/OIDC was still in beta, so perhaps it has improved since then.

[0] https://developer.okta.com/authentication-guide/saml-login/ ; https://gravitational.com/teleport/docs/ssh_okta/

Okta's position in this article is missing the bootstrapping problem.

Yes, you don't want people rolling their own crypto libraries. But if nothing has been rolled yet for your environment by anyone? You need a standardized open protocol that you have at least a chance of rolling correct crypto against.

Closing that protocol up in a nice shiny (but exclusively-owned-and-maintained) box does you no good if the box can't be bent to fit your use case.

I'm using Flask-Dance, it works rather nicely.

The article provides a very light history and technically shallow description of OAuth and OIDC so it can advertise Okta. Essentially, "these two protocols are complicated, and you probably don't care, so you should buy Okta."

Except this is Hacker News, where caring is fundamental. I'll pass on the Okta advertisement.

I'm legitimately not trying to advertise for Okta here at all.

This topic is near and dear to my heart because I've been working in this industry for many years now, and it is frustrating that there is a huge focus on building more OAuth/OIDC tools and encouraging developers to get directly involved in working with things like JWTs directly.

There are so many ways to mess things up at foundational levels today, I just really want to see better tooling created in the open source communities (and in paid products) to abstract things like OAuth/OIDC so that developers don't need to constantly be fiddling around with these lower level protocols where the risk for messing up is extremely high.

Advertisements with their hearts in the right place are still advertisements. If your tool were open source (it may be? not clear from TFA...) no one would complain. Since it's commercial and TFA takes such an anti-hacker stance, the complaint above is not surprising. It's not difficult to imagine that TFA might be very effective in speaking to some of your customers. With the customers who hang out here, a different essay with a different emphasis (e.g. "this is how everyone gets OAuth wrong and here's how we do it right") might be more effective.

I appreciate your blog posts. They're consistently interesting and useful. If you can do what you love and align that with your work, more power to you.

Whose bread I eat his song I sing.

I have a ton of respect for Todd but yes, this writer is just doing their job which is to promote okta

I agree the article title comes off as a bit tone-deaf to the developer community.

Maybe it's aimed at management?

Okta is not a replacement for either of those things, and I didn't see the plug in the article for their services that you mention.

It's at the end: "This is one of the reasons why [...] we spend tons of time and effort trying to build [...] client libraries [...] to hide those complexities and make securing your web applications simpler."

It may be the lightest of plugs, but it means that the writer is biased. Being biased doesn't mean you're wrong, but it throws the entire argument into doubt. I feel like I wasted my time.

Also, any security professional who just mentions in passing that OAuth was for authorization loses a little of my trust. It's true that "Auth" is short for "Authorization". But an important nuance is that it isn't authorization of the user; it's authorization for the application: the user authorizes this new app to get some information from one of the user's existing apps (like Google). For a programmer like me, this clears up why OAuth stands for authorization but always seemed more like authentication. From my understanding, OAuth doesn't handle any authorization of the user within your app. You have to handle that some other way.

The redeeming feature of OAuth, like pretty much anything security related, is that if you don't give people a standard to follow then you are probably going to end up with something that is insecure and poorly implemented.

A lot of people just don't have the background to do good security work, and when you try to explain why a short-lived credential is good, or why we can't have a plaintext password embedded in the git repo, their eyes glaze over and you lose them. So it's much easier to just dictate that it will be a TLS connection and you'll use OAuth, and then they go download a library and everything is good.

Where I ran into problems is with using OAuth for federated user logins. I did a proof of concept with Facebook; then after a couple of hours they decided I was a robot, and that was that: account closed, no recourse. Luckily that happened before I had a large user base using it. I don't know what you do if Facebook decides you're a robot and suddenly 50,000 people can't authenticate to your site; guess you learn something. A week later most of the other sites decided I was a robot and terminated my accounts. Google is the only site that didn't. (Which is good, because I have a decade of e-mail and photos I'd hate to lose.)

> don't know what you do if Facebook decides you're a robot and suddenly 50,000 people can't authenticate to your site

This is exactly why I am paranoid about depending entirely on social sign up/in. So if the API provides it, I believe it's a good idea to capture the email address associated with the social account and either generate a hard password for later reset if necessary or request the user set one up. The latter obviously reduces a little convenience of the social sign up, but does hint to the user that they can sign-in otherwise and gives them control.

Authentication is just too important to completely outsource without recourse.

> Authentication is just too important to completely outsource without recourse

I agree 100%. You need to own your users and all of their data. Along those same lines, I wouldn't want my user data going into a multi-tenant, cloud-hosted solution like Okta. I'd much prefer to store everything on my own servers or use an on-premise solution like FusionAuth or KeyCloak.

>...use an on-premise solution like FusionAuth or...


I see what you did there. :)

Good luck.

Haha. I try to be upfront that I work for FusionAuth. I mentioned it in my other comments in the thread but forgot to mention it here. Thanks for keeping me honest. ;)

Not to mention your users ignoring your site because they don't think FB (or GitHub, or Google, or whoever) need to know about you logging in there. I refuse to use any site that doesn't let me create an account.

The author of this article cannot be trusted. He regularly writes articles that compare non-best practices of JWT to best practices of other technologies to make JWT look bad. Because of him, I would not trust Okta at all. Take any article by Okta with a grain of salt.

What practices would that be? Both good and bad is interesting to me.

I recently decided against using JWT bearer tokens due to the concerns, expressed in both RFCs and OWASP guidance, that they are too risky when compared to session cookies.

But really I’m wondering if the risk of screwing up csrf protection isn’t more of a concern than token leaks.

I've read some of those articles and I found them fairly informative. Could you be more specific about the best practices that invalidate his arguments?

The definition of sour grapes right here

If you need to implement your own OIDC/OAuth provider, I highly recommend IdentityServer 4 [1]. I've used it in production and it was fairly painless to get up and running. It's .NET based and battle-tested. It has lots of samples on GitHub to show you how to get going with the various flows (which they even walk you through in their docs) and has a quickstart UI, so you've got something you can brand quickly to give users a consistent login experience. However, if you can manage not to host your own (don't want to mess with it), I recommend either Auth0 [2] or one of the AWS/Azure solutions mentioned elsewhere.

[1]https://github.com/IdentityServer/IdentityServer4 [2]https://auth0.com/

I implemented a minimal OpenID Connect server internally a few years back when it was difficult to find something for our specific use case in Go. I think that while the general idea is good, the spec is a mess: it's too flexible in all the wrong ways, so every large player implements it slightly differently, leaving you with no clear path to go yourself; and actually making the protocol (... concept?) less secure than it could be.

I kept thinking that for our use case, a simpler AEAD token, or perhaps something like Macaroons (if there were more battle-tested libraries for it...), would suffice. In the end I implemented a partial solution with one flow and one JWT signing algorithm so the other services could rely on existing client libraries for their respective programming languages.

Of course none of it ended up being used, as SAML (there's another can of worms...) support became a requirement and some large framework was brought in for that corporate software feel-good factor...

I agree with the sentiment that it would be nice if there would be new development in this area, even (or especially) if none of the big players are involved, but non-trivial crypto protocols and frameworks seem to need a lot of momentum to break out of their local ecosystems. It's not easy to design something that fits different use cases yet doesn't break at the joints.

> I think that while the general idea is good, the spec is a mess: it's too flexible in all the wrong ways, so every large player implements it slightly differently, leaving you with no clear path to go yourself

This was my experience doing _anything_ at all with OAuth. Now I am seeing the same with OpenID Connect, though not as severe. The spec was narrowed a bit, but it is still way too broad.

I recommend looking at the basic client profile of OIDC to start. It is a good cut through all the options.

Also you may want to look at the OIDC conformance testing tools for testing if you are building your own library or server. Certification costs money, but the conformance testing tools I believe are all open source.

I recently built an app that outsourced the entirety of the account creation, email validation, MFA and authorization to AWS Cognito and AWS ELB. All I have to do is verify a signed JWT passed in by a header by the ALB (and configure all that stuff and automate it into CloudFormation).

Never building this infrastructure again. What a huge time saver.
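For readers curious what that verification step involves: the ALB forwards the user's claims as a JWT in the `x-amzn-oidc-data` header. The sketch below only decodes the segments to show the structure; in production the ES256 signature must also be verified against the public key the ALB publishes (a JWT library handles this):

```python
import base64
import json

def decode_jwt_segment(segment):
    """Base64url-decode one JWT segment (header or payload), re-padding
    because JWT segments are sent without '=' padding."""
    segment += "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(segment))

def read_alb_claims(jwt_token):
    """Split header.payload.signature and decode the first two segments.

    NOTE: this does NOT verify the signature. Before trusting any claim,
    the ES256 signature must be checked against the public key the ALB
    publishes (looked up by the 'kid' in the decoded header).
    """
    header_b64, payload_b64, _signature = jwt_token.split(".")
    return decode_jwt_segment(header_b64), decode_jwt_segment(payload_b64)
```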

My company used Okta. We ended up cancelling it as it did not seem to offer any value whatsoever and was crazy expensive for our 300-odd users.

Is that architecture helpful for a saas app that needs to support users from corporate customers using single sign-on via an OIDC or SAML provider like Azure AD or ADFS?

I looked into using Okta or Auth0 for this sort of setup, but both were prohibitively expensive for a saas app.

We integrated one third party provider into our AWS Cognito as a test - Google. The idea was to allow many and myriad third party providers to integrate with it, and we wouldn't have to bother to do any of the integration: just configure the Cognito and the other sides, and our app logic does not have to care (but can get that info from the JWT payload if it cares).

FusionAuth is free. Less pain, more awesome! ;)

And it integrates with AAD and ADFS.

> We ended up cancelling it as it did not seem to offer any value whatsoever and was crazy expensive for our 300-odd users.

When you canceled Okta, did you build it in house, or was the AWS Cognito / AWS solution the replacement?

Did you look at anything on-prem like Keycloak or FusionAuth?

We had a great time with Keycloak, using it as an OpenID Connect provider. I haven't tried the social identity federation or Active Directory Federation either.

We did not replace Okta with anything whatsoever.

When I skimmed the cognito docs a bit ago I saw them talk almost exclusively about using it to control access to AWS resources. Would you recommend it for general web app auth as well?

Edit: typo

Okta is free for up to 1,000 monthly unique users; not sure why it's expensive. I just looked at developer.okta.com/pricing

Try asking for an SLA or any feature development or 24/7 support. You'll see why people think Okta and Auth0 are expensive. ;)

We were paying around $100/user/year for our around 300 users.

I agree that the learning curve is too steep with OAuth and OpenID and that supporting multiple scenarios could be part of the problem.

One specific problem I've faced is that it's very complicated to configure a development environment for unit and integration testing. I think there is huge demand for something that works well for Docker Compose and unit testing frameworks while still being a production grade solution.

I haven't evaluated everything out there, but so far the best solution I've found is Keycloak. Unfortunately it takes 20 seconds to start on my computer, and up until very recently I had to go through the configuration procedure each time I launched tests. I was fortunate enough to get a PR (https://github.com/jboss-dockerfiles/keycloak/pull/152) accepted which allowed me to load a working configuration file directly into the unaltered Docker container, with some users and roles for testing purposes.

So if you have something that starts quickly, is easy to combine with Docker Compose and with unit testing frameworks, and could be used in a production scenario... Let me know. ;-)

Oh, and it also needs to be on-premise.
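For the testing setup described above, once Keycloak is up under Docker Compose with pre-loaded users, integration tests typically just need a token from the realm's token endpoint via the password grant. A sketch of building that request (realm, client, and user names are placeholders; older Keycloak versions prefix the path with /auth):

```python
from urllib.parse import urlencode

def keycloak_password_grant(base_url, realm, client_id, username, password):
    """Build the token request for Keycloak's OIDC token endpoint using
    the resource-owner password grant (fine for tests; avoid in production).

    Returns (url, body); the body is POSTed with
    Content-Type: application/x-www-form-urlencoded.
    """
    url = f"{base_url}/realms/{realm}/protocol/openid-connect/token"
    body = urlencode({
        "grant_type": "password",
        "client_id": client_id,
        "username": username,
        "password": password,
    })
    return url, body
```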

Keycloak seems to be a pretty good on-premise option. Hard to know what will come of it now that IBM bought RedHat.

If on-premise is a requirement, you may want to look at FusionAuth. On premise, runs on Mac, Linux, Windows, Docker, and Kubernetes. This should check all of your boxes, installs in a few moments with the fast path install or docker-compose up. https://fusionauth.io/

I agree that Keycloak + Docker Compose is a great combination for development and testing. We use this setup in JHipster (https://www.jhipster.tech) when you choose OIDC for authentication. It auto imports users, roles, and OIDC apps so everything works right away.

Matt, you guys should switch JHipster to FusionAuth! Less IBM-ness you know. ;)

Maybe off-topic a bit, because it only handled authentication, but I really miss Mozilla Persona.

You just included some javascript in your frontend, maybe a 5-line function in the backend, and people could authenticate to your web app.

No need to store a password and correctly salt and hash it, provide a password change function, or provide a password reset function (what a PITA to get correct).

Why hasn't anybody created something comparable yet?

This is a sales piece for Okta. And I disagree with them.

OpenID Connect and OAuth work great and are reasonably designed. The documentation is dense, but you'll do fine if you read some of the popular blogs summarizing the different flavors.

Their comparison to "rolling your own crypto" is jarring. Rolling != using. You're likely a moron if you write your own crypto library and that applies to not using pre-made OAuth/OIDC libraries as well.

The complexity of OpenID Connect or even just SAML is exactly why I developed RosLogin for the ReactOS infrastructure: https://github.com/reactos/web/tree/master/www/www.reactos.o...

Our web services are all running under the same base domain, reactos.org, and we wanted a Single Sign-On system for all of them. I was surprised to find out that doing this simply seems to be an unresolved problem: CAS, OpenID Connect, and SAML all want you to set up heavyweight authentication servers and a certificate infrastructure for identifying each participating web service. A lot of protocol messages need to travel for a simple action like a user login, when a site-wide session cookie could do the same job. Sure, those systems support advanced features like access control and delegated authentication, but none of this is required if you just want to link a few web services under your own control, say a MediaWiki and phpBB forums.

RosLogin simply sets a site-wide session cookie on each user login. Each web service then just calls RosLogin::isLoggedIn() to check its validity and retrieve the user name. No certificates, no protocol messages, and no heavyweight server software is involved. Together with centralized Login, Registration, and Self-Service pages, RosLogin currently needs no more than 1600 lines of PHP code - perfectly auditable from a security standpoint!

The ReactOS infrastructure is mostly built around PHP web services, so PHP bindings and plugins for Drupal, MediaWiki, and phpBB are currently the only ones available for RosLogin. However, our few non-PHP services can still plug into the same user database by connecting to RosLogin's underlying OpenLDAP directory.
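The core of the approach described above, one HMAC-signed cookie on the shared base domain that every service can validate locally, fits in a few lines in any language. A Python analogue for illustration (this is not RosLogin's actual code, and a real deployment would also need expiry and revocation):

```python
import hashlib
import hmac

def sign_session(username, secret_key):
    """Produce a 'username.signature' cookie value for the shared domain.
    secret_key is bytes shared by all services under the base domain."""
    sig = hmac.new(secret_key, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}.{sig}"

def is_logged_in(cookie_value, secret_key):
    """Every service can validate the cookie locally: no protocol
    round-trips, just an HMAC check. Returns the username or None."""
    try:
        username, sig = cookie_value.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(secret_key, username.encode(), hashlib.sha256).hexdigest()
    return username if hmac.compare_digest(sig, expected) else None
```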

OAuth 1.0a handles both authorization and authentication. I still think it's unfortunate that big companies are pushing for OAuth 2.0 and trying to blindside developers as if OAuth 2.0 were an upgrade to OAuth 1.0a. It is not! OAuth 1.0a provides authenticity, integrity, and non-repudiation. Something that OAuth 2.0 cannot match.

OAuth 1.0a does not provide authentication to the client. There is nothing about a token that tells a client which user it represents, or if that user is currently authenticated. You can imply there was possibly some authentication guarantee against some user by the Service Provider if you request a new token, but that's it.

OAuth 1.0a provides integrity only for the client request parameters. It does not provide integrity for the client request headers or body. It does not provide integrity for any of the server response - which means while clients can make valid requests, a MITM could be providing it completely fraudulent data to act on.

Likewise, since OAuth 1.0a does not provide integrity over the entire request message, it also does not provide non-repudiation over the entire request message, or non-repudiation of the client for any server responses.

Some organizations have chosen to leverage OAuth 1.0a with non-standardized extension parameters like a body hash to try to partially solve these issues, but they tend to be very underspecified (which hashing algorithm? before or after Content-Encoding / Transfer-Encoding?).

Finally, there are no standardized modern cryptographic methods for OAuth 1.0a - it will likely remain on SHA-1 forever.

OAuth 2 provides integrity over the entire request/response by relying on TLS. Generally the only reason you would worry about this is if you are paranoid that:

- Clients are turning off TLS validation. Granted, this is still pretty common, and can be solved for your use case by requiring mutual TLS.

- Normally trusted intermediaries cannot be trusted not to modify your traffic. This would be something like a compromised TLS-terminating reverse proxy. This tends not to be solvable by mutual TLS.

There have been many failed attempts at improving on the signature algorithms in OAuth 1.0a - it turns out it is hard to make guarantees that there won't be legitimate reasons for an intermediary to muck with the request/response.

There is actually no reason why someone couldn't extract the existing request signature logic from OAuth 1.0a (sans token-secret in the HMAC, which IMHO was pointless) and propose it as an independent spec for doing end-to-end request signatures. But so far it seems nobody has wanted to do so, instead focusing (and stumbling) on trying to provide something better.
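For reference, the signature logic being discussed, roughly as RFC 5849 specifies it, looks like the following sketch (parameter normalization is simplified: repeated keys and the merging of query-string and body parameters are not handled):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def _enc(s):
    # RFC 3986 percent-encoding as OAuth 1.0a requires (only unreserved
    # characters A-Z a-z 0-9 - . _ ~ are left unescaped)
    return quote(str(s), safe="~")

def signature_base_string(method, base_url, params):
    """Build the OAuth 1.0a signature base string: METHOD & url & params,
    with the parameters sorted and each component percent-encoded."""
    normalized = "&".join(f"{_enc(k)}={_enc(v)}" for k, v in sorted(params.items()))
    return "&".join([method.upper(), _enc(base_url), _enc(normalized)])

def hmac_sha1_signature(base_string, consumer_secret, token_secret=""):
    """HMAC-SHA1 over the base string, keyed on the two encoded secrets."""
    key = f"{_enc(consumer_secret)}&{_enc(token_secret)}".encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```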

I agree! I really liked OAuth 1.0, especially from a security POV. But, the working group has moved on, and OAuth 2 is the standard now :x

Not everyone is using OAuth 2. Mastercard makes a security decision to stay with OAuth 1.0a. https://news.ycombinator.com/item?id=17482178

Also Twitter, whenever the app is supposed to act on behalf of a user. (OAuth 2 is simpler, and is available for accessing public data not on behalf of a user.)


Twitter is very reluctant to find out how much removing OAuth 1.0a (and their previous incarnations like XAuth) will break in their legacy infrastructure and for existing trusted clients. A migration path would mean adding OAuth 2 in addition to all the previous mechanisms, and then likely acting over a long migration timeline.

It's worth noting that MasterCard has made non-standardized extensions on top of OAuth 1.0a to meet their needs.

So, I know I'm naive and all, but it seems odd to me that this isn't baked into normal web frameworks. It is entirely ordinary for a website to want to allow a user to log in using Google/FB/Yahoo/whatever other IDs, so that they don't have to make a new ID (and remember a new password) just for your website. Probably half of the websites made with frameworks want this.

Oddly, when I want to do it, I rarely find this "baked in". Many other features which are less common are, or else have middleware that makes them pretty much just "add this line to your config file".

I have had to mess with OAuth two or three times in ten years, so needless to say I am hapless at it. If I did it less often, or more often, it would be fine, but it's not. Does anyone know why this isn't rolled into frameworks like Django, as a boolean that you set to TRUE?

It's baked into ASP.NET Core, and it's still a pain in the ass.

Even though the middleware handles JWT validation, cookie storage, redirects, etc. (which are a big part of it), there are still a ton of little details to OAuth that clients need to know enough to specify:

- what grant flow are you using (code, hybrid, implicit, resource owner, etc.; all are valid for different and overlapping scenarios, based on how secure you want to be and how much hassle you want your users to experience)

- client id and secret management, and registering this with your server

- refresh tokens: i.e., how do I prevent my user from needing to log in every 15 minutes without making my JWT last an insecure amount of time?

etc. etc. etc. I've built OAuth/OIDC into dozens of apps, and have deployed my own identity providers (not rolled my own, thank the gods, just used a framework for the server side), and it's still the bane of my development existence.
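On the refresh-token bullet above: the wire format at least is simple, a form-encoded POST to the token endpoint per the OAuth 2.0 spec. A sketch (how the client authenticates, body parameters vs. HTTP Basic, varies by provider):

```python
from urllib.parse import urlencode

def refresh_token_request(refresh_token, client_id, client_secret=None):
    """Body for the OAuth 2.0 refresh-token grant, exchanged at the token
    endpoint for a fresh access token (and often a rotated refresh token)."""
    body = {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
    }
    if client_secret is not None:
        # Confidential clients may instead authenticate via HTTP Basic auth
        body["client_secret"] = client_secret
    return urlencode(body)
```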

That is another good point: generally you want to force a user to reauthenticate whenever they enter an elevated security context (account management, personal information, etc.). With the OAuth flows there doesn't appear to be a way to do this. If I tried to send them through the login process again, they would just end up back at my callback address because their token was still valid.

Two ways to interpret your problem:

1. You want to perform a stronger form of authentication for users who are interacting with certain parts of your application

In this case, you can proceed in two ways. One would be to use the OpenID Connect authentication context parameter and request the stronger form of auth directly. I personally dislike this approach because it bleeds business logic into all your clients, although it is popular in some government use cases (where authentication levels are part of the API contract).

You can also have the applications represent the elevated security context as scopes (e.g. a moderation or admin scope). The business logic within the AS can interpret requests for these scopes as requiring stronger authentication.

2. You want to reauthenticate the user when they go to an elevated security context.

The AS should be responsible for your authentication policy, and deployments tend to go simpler if you don't split this responsibility with the application.

The traditional way to do this would be similar to the scope rule above: scopes representing access to elevated contexts, and the AS recognizing it needs to enforce stricter authentication rules for this to work. For instance, you could have a rule that the user needs to reauthenticate every half hour, and make access tokens which have elevated privilege scopes have a limited lifetime to force retrieving a new token.
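A toy sketch of that AS-side rule (the scope names, lifetimes, and half-hour window are invented): tokens carrying elevated scopes get a short lifetime, which forces the client back through the AS, where the reauthentication rule can be applied:

```python
ELEVATED_SCOPES = {"admin", "account:manage"}  # hypothetical scope names

def access_token_lifetime(scopes, default=3600, elevated=300):
    """Return the token lifetime in seconds for a set of granted scopes."""
    return elevated if ELEVATED_SCOPES & set(scopes) else default

def needs_reauth(scopes, seconds_since_login, max_age=1800):
    """AS-side rule: elevated scopes require a login within max_age seconds."""
    if ELEVATED_SCOPES & set(scopes):
        return seconds_since_login > max_age
    return False
```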

A more modern way to do this might involve stepping back and looking holistically at why you are reauthenticating the user. This might be because you are worried that someone may have walked up to an unlocked machine and proceeded to try to access administrative features.

For this, you might instead look at using EMM and machine policy to reduce that risk rather than impacting UX - such as reducing the time of the screen lock and eliminating the ability to use hot corners to keep the screen awake for users with administrative access. Your AS can then detect and rely on EMM compliance as a factor.

Django doesn't work nicely with OAuth, but Flask has Flask-Dance which works rather well.

The reason it (OAuth) was built was user convenience: one-click sign-in, for the most part, and frankly that is more important than most considerations or tradeoffs if it accomplishes the job. I do feel like this is always a bear to put into apps, and I've even gone to the lengths of making a highly reusable microservice to handle this for my apps, because it is just annoying in general to keep setting up. I don't like Cognito or Firebase, as they are too limiting on providers. I just took the approach of using the already rich ecosystem of Node.js and Passport-compatible plugins for providers, with config lookup based on both the signed requests coming in and where they originate. Was definitely worth the time doing this.

We've started using AWS Cognito, which provides social logins and OAuth, and even mobile and web client SDKs for integrating. Writing yet another authentication system was just too much, and it's great that there are services like Okta and AWS stepping up and letting people just use those, and at least in the case of Cognito, at minimal cost.

I've heard that Cognito is very limited in terms of features and can be a headache to get working properly.

I haven't used Cognito, but I work for FusionAuth (https://fusionauth.io) and we have a number of developers that have switched from Cognito to FusionAuth because we cover more of their use cases outside of plain authentication.

Not sure if you have had similar experience with Cognito being limited.

Well, Cognito is a different product from what FusionAuth is. As far as I know, Cognito is not going to be what you want to use for SSO, for example. It's more for consumer-facing services. As for being hard to set up, that hasn't been my experience. If you're familiar with AWS already, it's trivial, and it interacts with other AWS services. For example, you can let users directly upload to S3, or authenticate automatically via API Gateway. I got it working pretty quickly. The mobile app support is great too. Cognito is also much cheaper than Okta: you don't pay anything for the first 50,000 monthly active users.

The original OpenID implementation was so simple and its scope so small; I wonder where we got all this complexity.

For OAuth 2.0 at least, the creator disowned it because the process spun out of control.


OpenID (not connect) had a weird OpenID URL users had to care about. Mine was something something google something u8. Nobody cared. Sites like HN removed OpenID logins. OpenID died.

OpenID Connect just standardised the existing practice of being authorised to use an identity provider as a type of authentication.

Edit: actually something something o8:


Again, this was required for USERS, not developers.

This is someone asking how to log in: https://webapps.stackexchange.com/questions/18899/how-do-i-f...

This is an implementation issue, not a spec issue. Pages can delegate to another identity provider (primarily used for vanity URLs, for example I could say that https://nullable.se/ should map to my Google OpenID, for example), and the identity provider can claim a different URL than you asked for (for example, Steam always uses https://steamcommunity.com/openid as the provider, but will claim a different URL for each user).

Combine these, and Google would have been perfectly capable of making everyone just type google.com. That they didn't says more about Google than about OpenID.

Besides, people were perfectly capable of showing the same pickers back then as you are forced to use with the current OIDC/OAuth2 abomination.

> That they didn't says more about Google than OpenID.

HN did the same thing. In fact pretty much every OpenID site implemented this weird expectation. Part of helping a standard get adopted is making sure it's implemented correctly.

I kinda feel like this was on the right track but then it turned into, let me sell you something.

Isn't this like a bicycle company saying no one cares about walking anymore?

Here's my problem with OIDC, and I don't know where to bring this up to find out if my hopes are wildly off or I've missed something big and it already exists...

There's nothing specified anywhere that allows an application to interrogate an RP about the available scopes. I just don't see how fine-grained resource access can be done with OIDC without requiring the user to grant much coarser access first. I found a draft spec once but it was abandoned and the author didn't reply to an email I sent...

If you make a request without a token (or with an insufficient token), the resource can respond with the required scopes. As in:

WWW-Authenticate: Bearer realm="example", error="insufficient_scope", scope="view_topic"

In terms of a map of what scopes are required for all resources on a site - unfortunately, just like other non-RESTful things, that is conveyed outside of the interactions today, via something like OpenAPI descriptors and documentation.

The AS can convey a list of requestable scopes in its metadata. There were specs like host_meta which could have been easily leveraged to do the same for a resource server, but I am not sure host_meta actually went anywhere.
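A small sketch of the challenge side of this (the header shape follows RFC 6750's `insufficient_scope` example above; the parsing here is deliberately naive, not a full production parser):

```python
def insufficient_scope_challenge(realm, required_scope):
    """Build the WWW-Authenticate value an RS returns alongside HTTP 403."""
    return ('Bearer realm="{}", error="insufficient_scope", '
            'scope="{}"').format(realm, required_scope)

def required_scopes(www_authenticate):
    """Naively extract the scope attribute from a Bearer challenge."""
    body = www_authenticate[len("Bearer "):]
    for part in body.split(", "):
        key, _, value = part.partition("=")
        if key == "scope":
            return value.strip('"').split()
    return []

challenge = insufficient_scope_challenge("example", "view_topic")
```

A client seeing this challenge knows exactly which scope to add to its next authorization request, without any out-of-band scope map.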

And a problem for the Resource Server: no common way to check the validity of a token. You may be lucky and get a JWT, but even then there's no specified URL to get the keys to check it. Usually you get an opaque token and hope the client gives you a way to know where it comes from, so you can do some specific server-to-server call.

So you can't implement an independent RS which does not care about clients and how they authenticate their users.

One of the complexities that a lot of people don't realize is that the roles involved in OpenID Connect and OAuth are different - OpenID Connect only minimally involves resource servers (in that they can use an access token to hit the user info endpoint to get information on the user session).

OpenID Connect adds things like dynamic client registration for a client (acting as an RP) to get authentication information, and you can use OpenID Connect to get both that authentication token and an access token at the same time.

However, this unfortunately doesn't extend to weakening the dependency on a protected resource receiving one of these access tokens needing a relationship with the issuing AS.

Unfortunately, dynamic client relationships haven't caught on enough for there to be focus on this issue. But on the flip side, there would be a lot of other agreements around what those access tokens mean and what API they can apply to before you could have such a dynamic authorization relationship.

Now, if the goal is for _your own_ protected resources to rely on authorizations provided by other ASs - likely you just shouldn't do it that way. Instead, you should have a local AS that itself has relationships with multiple other ASs acting as an intermediary (or what Microsoft has historically called an STS). It authenticates based on other providers, gets their authentication and access tokens, and translates them into tokens that the local environment can understand based on a single, centralized policy.

Look at the JWT, take the issuer claim, fetch the .well-known config from that URL, and you have keys. Match that against the known permanent issuer you expected, and verify the signature of the JWT against the keys. If you're accepting things that aren't JWTs then you're not doing oauth.
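A stdlib-only sketch of the first steps of that recipe (reading the issuer and deriving the metadata URL); the signature verification itself needs a JOSE library such as PyJWT, using the keys at the `jwks_uri` listed in the discovery document:

```python
import base64
import json

def discovery_url(issuer):
    # OIDC Discovery publishes metadata (including jwks_uri) here
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

def unverified_claims(token):
    # Decode the JWT payload WITHOUT checking the signature; use this
    # only to read the issuer, then verify against that issuer's keys.
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

# claims = unverified_claims(some_jwt)
# metadata at discovery_url(claims["iss"]) carries "jwks_uri";
# verify the signature against those keys, and reject unknown issuers
```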

> If you're accepting things that aren't JWTs then you're not doing oauth.

As a resource server: https://tools.ietf.org/html/rfc6749#page-10

Access tokens don't have to be JWTs. And some OpenID authentication servers give opaque tokens: your resource server has to know how to call them to check the token and get some user info if available.

Yes, I misread and typo'd.

Only OpenID Connect has a guarantee that it will issue a JWT (and only the id_token). OAuth allows access and refresh tokens to be opaque to the client - e.g. either:

- a value only the AS understands (in which case the protected resources will need to use an API to introspect them)

- a format that both the AS and protected resources can understand (such as a legacy crypto format, a COSE token, or a database key)

If your client has to pull apart an access token to extract information and make a decision, you are doing OAuth wrong. In particular, that client code will never work against another AS, and it is more fragile than normal against changes on the AS.

Yes, I misread and typo'd, thinking OP was talking id_tokens and OIDC. Completely correct on the last para, I spend quite a bit of time convincing partners of this.

That's what the Introspection endpoint should be for.

But that is the problem with OAuth 2.0: every implementer has a choice for lots of things.

When you get a token, you are not sure to get it as a JWT. And if it is a JWT, the issuer alone is not enough to know what endpoints it provides.

You still have to care about your clients' authorization provider so you can check their access tokens. But if you're building a resource server, you should not have to care about all this. Creating a resource with this identity from this issuer? Go ahead. Want to access it again? Let me check your token in a standardized way: I shouldn't have to know whether it was issued by Google, the Canadian government, or your own personal server.

Years ago, I did care about adding FB login for a while, until my app was removed from approval because of some strict guidelines. I'm pretty sure it would have required real effort to resolve. Since then, I haven't even considered adding any OAuth unless the project has already achieved a good presence.

This article kinda glossed over what the fuck OpenID Connect is actually for.

Like, in a big way: I have no idea what the fuck OpenID is, what authorization and authentication are in this context, and why OAuth doesn't handle both.

Authorization = Can I do this?

Authentication = Am I this person?

Technically, OAuth only cares about Authorization. It provides an opaque token to the client that authorizes future requests. It doesn't tell the client anything about who the token is for.

Turns out most clients need to know who is logged in (such as loading basic profile information, email address, avatar, etc). In practice this is usually done through some sort of /user endpoint that accepts an OAuth token. This /user endpoint implementation is service specific.

OpenID Connect tries to fill that space by providing a standardized way to perform authentication and identity information.
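For example, hitting such a service-specific /user endpoint is just a request with the access token in a Bearer header (the URL here is GitHub's actual user endpoint; the token is a placeholder):

```python
import urllib.request

def user_info_request(endpoint, access_token):
    """Build a request to a /user-style endpoint carrying a Bearer token."""
    return urllib.request.Request(
        endpoint,
        headers={"Authorization": "Bearer " + access_token},
    )

req = user_info_request("https://api.github.com/user", "ACCESS_TOKEN_HERE")
# urllib.request.urlopen(req) would return the profile JSON, given a real token
```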

More than that, the bearer model of OAuth doesn't guarantee the token represents a particular user or even a particular client.

Any assertion that you are dealing with a particular person needs to be targeted at you. There were security vulnerabilities with some early OAuth deployments that used OAuth for authentication and didn't understand this, allowing one service that had a relationship with a user to act as that user against all other services which trusted that AS.

For Facebook, they solve this by having the token stamp information about which client it was issued to, which the client is required to verify. For OpenID Connect, they decided to preserve the ability of OAuth access tokens to be opaque to the client, and create a new token called an id_token.
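That targeting check is the crux: an id_token is only evidence of *your* user if its `aud` claim names your client_id. A claims-level sketch (signature verification omitted, and the issuer/client values are invented):

```python
import time

def validate_id_token_claims(claims, expected_issuer, my_client_id, now=None):
    """Check the targeting claims of an already-signature-verified id_token."""
    now = now if now is not None else time.time()
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    return (
        claims.get("iss") == expected_issuer
        and my_client_id in audiences      # token was issued *to us*
        and claims.get("exp", 0) > now     # and has not expired
    )
```

A token legitimately issued to some other client fails the `aud` check, which is exactly what stops one service from replaying a user's token against another.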

> Back in pre-2007, there was no way for developers to build apps that needed to securely access user data in another service.

I understand there are cases and people for whom this is needed, but as for me, I bloody never want anybody to access my "user data in another service", and that's why I strongly prefer plain old email sign-up: in many cases, if you sign up with your Google account, the service will also request your contact list, your personal details, and everything else, and I never want to share those.

Nevertheless, the single sign-on feature OAuth/OpenID provide seems very convenient (unless implemented improperly, when the service takes your OpenID and still asks you to set a password, a login name, and/or an e-mail address), so I still use it for some services.

Just think about companies like Mint who scrape your banking and financial information using your creds. Not sure if Mint has changed this or not (or if the banks opened an API) but at one point it was the case.

In the enterprise space these solutions help a lot. Especially with the rise of enterprise cloud software solutions like Salesforce, Workday, and ServiceNow.

They still do that for 99% of the banks, since most banks don't support OAuth/OIDC yet :(

Chase recently switched over to OIDC which was pretty neat, but for most banks, it is an absolute nightmare.

There are several initiatives (such as Open Banking) which are mandating certain levels of access.

When I read about that, I lost my mind. People who give their banking creds to someone who isn't their bank are insane.

That's one of the nicer things about these flows: you can have scoped data that is allowed, and nothing more.

The ideal use case here is for banks, and services that store a lot of sensitive data. If you want to pull your data from your bank into a service like Mint, you would ideally like for Mint to only get read access to your transactions, nothing else: no ability to send $$$, etc.

> That's one of the nicer things about these flows: you can have scoped data that is allowed, and nothing more.

IIRC the user is rarely given a choice, same as when you install an Android app from the Google Play store: they just tell you the app demands access to this and that and give you an OK/Cancel choice, instead of options to allow or disallow access to every particular element in the app's list of demands and still install it regardless of what the app author wants (I don't mind if some features won't work then). IMHO this is seriously wrong.

> as for me I bloody never want anybody to access my "user data in another service."

You're forgetting that local applications fall into this category too: email, calendaring, storage, messaging, remote access, music/movie streaming, VPN, Twitter, code repositories, container registries, cloud cli tools, chat, chatbots, database access, directory access.

Wouldn't it be nice if you never had to click "Save my password" or use a service account ever again?

I care...

tl;dr nobody cares how the sausage is made. they just want the sausage. they are hungry.

Unless I'm misinterpreting, this apparently includes the people whose job it is to make the sausages (i.e. Okta).

After spending enough time making sausages, I too, would not want to know

Ha.. got a good laugh.
