Google OAuth is broken (sort of) (trufflesecurity.com)
350 points by mooreds on Dec 21, 2023 | hide | past | favorite | 182 comments



I work in this space and have dealt with variants of this exact vulnerability 20+ times in the last 3-4 years across a wide range of login providers and SaaS companies. The blog post is correct, but IMO the core problem itself is too far gone to be fixable. Delegated auth across the internet is an absolute mess. I have personally spoken with plenty of Google and Microsoft engineers about this, so I can guarantee they are all already well aware of this class of problems, but changing the behavior now would just break too many existing services and decades-old corporate login implementations.

The "fix" at this point is simply – if you are using "Sign in with XYZ" for your site, do not trust whatever email address they send you. Never grant the user any special privileges based on the email domain, and always send a confirmation email from your own side before marking the address as verified in your database. All the major OAuth providers have updated their docs to make this explicit, as the post itself points out. In fact I'm surprised there even was a payout for this.
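The advice above can be sketched roughly like this (Python; function and field names are hypothetical, and `db` is a stand-in dict for a real datastore):

```python
import secrets

def handle_oauth_callback(claims: dict, db: dict) -> str:
    """Create a local account from OIDC claims WITHOUT trusting the email claim."""
    user_id = (claims["iss"], claims["sub"])   # stable identity per the OIDC spec
    token = secrets.token_urlsafe(32)          # one-time confirmation token
    db[user_id] = {
        "email": claims.get("email"),
        "email_verified": False,               # ignore whatever the provider claims
        "pending_token": token,
    }
    return token  # in a real app, email this to the address as a confirmation link

def confirm_email(user_id, token: str, db: dict) -> bool:
    """Mark the address verified only after our own confirmation round-trip."""
    user = db.get(user_id)
    if user and user["pending_token"] and secrets.compare_digest(user["pending_token"], token):
        user["email_verified"] = True
        user["pending_token"] = None
        return True
    return False
```

The point is simply that `email_verified` in your own database is set by your own email round-trip, never copied from the provider's claim.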


I feel as though this is a consequence of organizations not really understanding how complex the space truly is. The way I've watched OAuth2 + OIDC get adopted in various companies was never from a security-first perspective; rather, it's always sold as a "feature": login with x, etc. Even when there are moves to make flows more secure - PKCE, for example - you end up playing a game of "whack-a-mole" with various platforms doing shitty things in terms of cookie sharing, redirect handling, and the like. The fundamentals of 3-legged OAuth2 are sound and there's tons of prior art (CAS comes to mind), but the OpenID Foundation should be tarred and feathered for the shitty way they marketed and sold OIDC.


OpenID Connect and all its extensions are so high in complexity and scope. The documents themselves are massive and written in a form that's quite hard to understand. I've implemented many protocols and RFCs, so I feel I have some experience here.

Because OpenID Connect and OAuth2 are so closely related, I worry that some of this overengineering is making its way back into new OAuth2 extensions.

I'm worried both will eventually collapse under their own weight, creating a market for a new, simpler incumbent and setting us back another 10 years as all this has to get reinvented again.

My outside impression is that the OIDC folks are highly productive with really strong domain knowledge and experience, but they're not strong communicators or shepherds with a strong enough vision.

The sad thing is that this is the second thing with the OpenID name that's going down this path. The original OpenID concept was great but also collapsed due to their over-engineering.


Agree - you only need to look at things like the hybrid flows to see where things fall apart: why would you issue an id_token that contains user information to a client which hasn't yet fully authenticated itself via a code-to-token exchange by passing its client_id + secret? If you look at certain implementations, such as Auth0, you'll find that they actually POST that token back to the redirect_uri, since a) it's at least registered against the client; b) it's not subject to capture as easily. The spec says NOTHING about protecting this info, though.


> Never grant the user any special privileges based on the email domain, and always send a confirmation email from your own side before marking the address as verified in your database

Doesn't this fail if the user registered an account (on google) with the plus sign address, and then signed into your service with that google account before getting the boot? Unless you're sending a verification email for every sign-in...


The idea is that the company's Zoom admin should always specify the exact list of users who are allowed to be in their Zoom account. The email domain should have no bearing. So if you sign in as bob@mycorp.com, and that is a valid corporate account with the right permissions, you are let through. If you try bob+foo@mycorp.com, it should always fail. The pattern of "oh they have a mycorp.com email so they are probably legit" was broken from the start.

This is thankfully less of an issue now since everyone is moving to SAML-based logins and SCIM provisioning.
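A minimal sketch of that pattern (illustrative names; in practice the list would come from the admin console or SCIM provisioning):

```python
# Membership is decided by an exact, admin-maintained allowlist,
# never by the email domain.
ALLOWED_USERS = {"bob@mycorp.com", "alice@mycorp.com"}  # provisioned by the admin

def may_join(email: str) -> bool:
    # Exact string match only: "bob+foo@mycorp.com" is a different address
    # and must be rejected, even though Gmail would route it to bob's inbox.
    return email.lower() in ALLOWED_USERS
```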


> This is thankfully less of an issue now since everyone is moving to SAML-based logins...

Hate to break it to you but SAML is same shit different coat of paint, the xml encryption/signature/encoding stuff it pulls makes it just as much a tarpit for bugs and misconfiguration.

SCIM seems pretty decent though to explicitly state who is and isn't on the Guestlist.


What I’ve also seen is integrations with a different OIDC endpoint for company X. It’s still OIDC, but it’s not “sign in with Google”.


This will help, but at the same time it will ruin the whole idea of seamless registration of corporate users in an external service, damaging adoption and increasing friction.


If a company has no directory of their employees, associated company email addresses, and employment status, they likely have much worse problems.


Most companies have these directories but in different forms, including Wiki pages. The idea of auto-enrollment is that it is based on a standard and widely adopted OAuth protocol.


Employee records as wiki pages? That must be a joke, right?


Why not? If your company has lots of employees, like tens of them, you may create a nice template for those pages.


Because it’s not suitable for machine-processing of employee data. This belongs in a database, Active Directory, or HR software. Do you also do payroll with wiki pages?


Not everything should be suitable for machine-processing. Active Directory should be administered by a certified MS professional. HR software has to be paid for, and someone has to learn how to use it first.

On my second startup we payrolled via Google Spreadsheet. By the way, it is also an okay storage for employee data.


I suppose I'll confess that you are correct: this approach is totally viable at smaller scales. I've just seen what happens in a large business context.

It's, above all, a security nightmare, since the wiki pages are perpetually out of date and incomplete, and it makes processes like provisioning and deprovisioning access extremely labor-intensive.


Why not on a piece of paper then?..


It's hard to share and update.


They sure have those directories in different forms out of sheer need, but every time I've seen people have to consult wiki pages for information that should be in a standardized directory, it's a shitshow that is borderline impossible to automate.


> Unless you're sending a verification email for every sign-in...

Yes, absolutely do this! This is what Slack does, and what we do at my current employer (defined.net). "Magic link" email + TOTP is pretty slick.


Please don’t do this, or at least try to offer a normal login alongside it. This is really bad UX. Instead of being able to have my password manager sign me in without me having to do much, I suddenly have to open my email (which I might have closed because I want to focus), go to the email - god forbid I have to wait 10s for it to arrive - and then it opens a new tab for me…


> "Magic link" email + TOTP is pretty slick.

Agreed. But unfortunately it's also highly phishable.


Can you explain how you would phish a user with a magic link? Since the service is generating a one-time code, and sending it directly to the user's email inbox, I am not sure how an attacker would intercept the code.


The attack works by getting the user onto a page you control that looks like a slack page that says, "we need you to confirm your email". User enters their email and gets a legitimate email from slack. User enters the code on the original phishing page and the attacker gets a link that lets them log in as the user. I built this exact exploit for slack in a few hours. It was trivial.

I've never seen a foolproof way to mitigate this. Best you can do is big warnings in the email telling the user never to enter the code anywhere but slack.com. You can also do fancy stuff like comparing IP addresses to make sure they're from the same region but the attacker can also do fancy stuff like detect where your IP is from and use a VPN to get an IP in the same area.


The foolproof way is to not send codes as a 2FA (be it mail, SMS or whatever). There is always a risk that the user fails to verify where they're putting that code. Instead use something that verifies the domain without relying on the user, e.g. U2F or passkeys.

In that case the user needs to be fooled into sending the physical device or a passkey-app backup to the attacker. That is much more suspicious and requires a far more gullible victim than someone entering a code after they've already entered their username + password.

If you know that the user opens links sent via email in the same browser they use for login: for the 2FA step, set a cookie with some unique value on your login domain and send the user an email with a link. Opening the link only finishes the login and starts a valid session if that unique cookie is present. This makes it harder for an attacker, since they need to inject that cookie into the victim's browser, which means they need to find an XSS-style exploit. Of course you then want to reduce the attack surface by putting the login function on a subdomain of its own.
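The cookie-binding idea can be sketched like this (hypothetical function names; `pending` stands in for server-side state, and the cookie/token values would be set and read via HTTP in a real app):

```python
import secrets

def start_login(email: str, pending: dict) -> tuple[str, str]:
    """Begin a login: one random value goes in a cookie, one in the emailed link."""
    cookie = secrets.token_urlsafe(32)      # set as a cookie on the login domain
    link_token = secrets.token_urlsafe(32)  # embedded in the emailed link
    pending[link_token] = cookie
    return cookie, link_token

def finish_login(link_token: str, presented_cookie: str, pending: dict) -> bool:
    """Only the browser that started the flow holds the matching cookie."""
    expected = pending.pop(link_token, None)  # single use: consumed on first attempt
    return expected is not None and secrets.compare_digest(expected, presented_cookie)
```

A phisher who relays the emailed link still fails, because the victim's cookie never reaches the attacker's browser.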

And obviously this fails if the user starts the login on, e.g., their personal computer and then tries to verify the session on their company phone. This can be good or bad, depending on the scenario.

TBF, I wish some companies would use even that basic code-by-whatever 2FA. I've seen cases with something like 5 different domains, all with various logins that customers and employees use. Want to phish them? Just register another domain that looks similar enough to the others and send some emails. But then there are still services limiting the password length to something like 16 characters, so I think we will still have plenty of work...


Passkeys are cool, but not widely deployed yet. Also, there's currently no way to express "give X identity Y access on Z server", unless X identity already has a passkey set up with the server. This is a non starter for decentralized networks with lots of federated and self-hosted servers. This use case is trivial with email.


Thanks - but this sounds like an email 2FA flow, not a magic link.

A magic link is a link a user clicks in their browser, that lands them on the appropriate service, where the one-time code is part of the URL. The service consumes the token and provides the user with a (first factor) authentication token.

In other words, the email doesn't display a code which they could go paste into the attacker's page. Though they may still need to perform a 2FA flow following the magic link flow (and this portion is still phishable!)

Your critique is definitely valid for most forms of 2FA (email, SMS, and TOTP.)


You are correct that this mitigates the security problems.

However, the method you're describing has fallen out of favor, in large part because mobile email apps often use a built-in browser that doesn't share cookies with the system browser. This creates several confusing UX problems. You also can't use a logged in device to log in a new device, unless you implement something like QR login which is also phishable.

Slack for example used to work the way you describe but now uses emailed codes for 1FA login.


As with sibling comment, what threat vector do you see phishing risk with?

A race condition where the phishing email lands first, user clicks link to g00gle.com, gets a convincing message that they also need to present username and password?


See response to sibling


Thank you - as sibling also mentioned, what you're describing isn't a magic link but a standard TOTP/HOTP delivered via email, which absolutely is phishable in the manner you described.

Magic link is a process where you enter your email address and the service sends you an email that contains a clickable hyperlink that contains a cryptographically strong, short-lived nonce in the URI that is used as a proof-of-possession factor (the email account) to authenticate users.
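That flow can be sketched in a few lines (illustrative names; the TTL and URL are assumptions, and `store` stands in for server-side state):

```python
import secrets
import time

TTL = 600  # link valid for 10 minutes (an assumed policy, not from the thread)

def issue_link(email: str, store: dict) -> str:
    """Generate a single-use magic link containing a strong nonce."""
    nonce = secrets.token_urlsafe(32)
    store[nonce] = (email, time.time() + TTL)
    return f"https://example.com/login?token={nonce}"  # emailed to the user

def redeem(nonce: str, store: dict):
    """Consume the nonce; proof of possession of the inbox authenticates the user."""
    record = store.pop(nonce, None)  # single use: deleted on first redemption
    if record and time.time() < record[1]:
        return record[0]  # the authenticated email address
    return None
```

Because the nonce lives only in the URL and is consumed server-side, there is no code for the user to paste into an attacker's page.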


See third cousin


looks like you replied to the wrong comment


I hate this experience. It's an absolute pain when emails are delayed and sign in fails, or just when some apps fail to persist state when I switch to my email client on mobile.


MoneyGram recently lost my business because of this. Their auth emails weren't coming through. Couldn't complete my transaction. I have a 20+ char password. Dunno why they have to dick me around with email auth. Use TOTP if you must, it's also annoying but at least it doesn't fail.


> Unless you're sending a verification email for every sign-in...

Isn't that your typical 2FA flow?


But in this particular case the attacker will actually receive a confirmation email, so what's the point?


> always send a confirmation email from your own side before marking the address as verified in your database

This would render the login with x feature useless from a user pov.


Would it be useless? Wouldn’t the verification email happen only once, after which you gain all the SSO benefits?


You still get the benefit of not remembering an extra password.


Why couldn't Google prevent signups using domains of Google Apps customers? That doesn't seem like it would break anything.


because the collisions can already exist when someone signs up for google workspace. and what are you going to do, delete those accounts? a lot of those would be personal accounts on educational domains...


That's not a reason to keep allowing individual signups on @customer.domain AFTER customer.domain becomes a Google Workspace domain, which is the hole being discussed in the article.

FWIW Adobe actually lets businesses take ownership of individual accounts with the same domain... see https://www.adobe.com/legal/terms.html section 1.4


Google has an ownership-taking process: the user gets informed, and during their next login their address changes to a gtempaccount address.


I used to have a google account with my personal domain then I registered to Google Workspace 4-5 years ago. Since then, I have this message when I log in :

> Your account has been modified.
>
> The address user@domain.com is no longer available because an organization has reserved rickynotaro.com. Why is this important now?
>
> Don't worry. Your data is safe. To use them, you must create an account with a different email address. Your password and security settings will remain unchanged.

> Account details
>
> What type of account do you want to use?
>
> An account with Gmail and a new Gmail address. Select this option to add Gmail to this account. Unfortunately, we cannot move your data into an account with an existing Gmail address.
>
> An account using an email address belonging to you, not linked to Google. Ex.: myname@yahoo.com. Choose this option if you want Google products except Gmail.
>
> [Continue] [Do it later] Not sure what to do?

I've been pressing "Do it later" for years. I'm still using this account for youtube, maps and other services.

I should probably use a secondary domain and use this address.

Some services prevent me from changing my email address when signing in with Google, and weird behaviour happens when they try to use my user%domain.com@gtempaccount.com address.

Google documents it here: https://support.google.com/accounts/troubleshooter/1699308?h...


Related: More service providers need to stop using email as the primary identifier (as Google’s docs recommend). When I changed my username on Google Apps, I spent a lot of time dealing with issues at Slack, Datadog, GoLinks, and others.


What should providers be using?

I've always presumed email was the most stable global identifier for a user, but that assumption appears to be wrong.


If you're logging in with OIDC (as is the case w/ the OP), the combination of the issuer and the `sub` claim identify the user (the "subject"). The relying party (the system wanting to authenticate the user) just treats sub as an opaque string (unique within that issuer). The `email` claim is just the "End-User's preferred e-mail address." (And … "The RP MUST NOT rely upon this value being unique" … the end user's email might change … for whatever reason the end-user, or their IDP, might prefer. Also, any other IDP might claim that email as the preferred email, potentially truthfully, too.)

The docs, however, seem to be discussing the notion of the relying party's "user" object. For that … use a UUID, an auto-incrementing int, some artificial key, totally up to you. Link that user object with the (iss, sub) tuple above. But you should consider¹ whether user can adjust their authentication method with whatever you're building: e.g., if I change OIDC providers … can I adjust in your RP what IDP my account is connected to? (Same as I might need to update an email, or a password in a more classic login system.)

(¹The answer is also not always "yes", either; I work with a system where the IDP is pretty much fixed, because it's a party we trust. All signins have to come from that specific IDP, because again, the trust relationship. But, on the open web where I'm just building Kittenstagram, you don't care whether the user is signing in with Hooli's IDP, Joja's, etc.)
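The (iss, sub) linking described above might look like this (names and in-memory dicts are illustrative stand-ins for real tables):

```python
users = {}       # the RP's own user records, keyed by an artificial internal id
identities = {}  # (iss, sub) -> internal user id

def find_or_create_user(claims: dict) -> int:
    """Resolve an OIDC (issuer, subject) pair to a local user, creating one if new."""
    key = (claims["iss"], claims["sub"])
    uid = identities.get(key)
    if uid is None:
        uid = len(users) + 1                          # artificial key; a UUID works too
        users[uid] = {"email": claims.get("email")}   # email is mutable metadata, not identity
        identities[key] = uid
    return uid
```

Note that a changed `email` claim maps back to the same local user, while a different `sub` (even one claiming the same email) gets a distinct account.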


This is how we do it: sub + issuer associated with an account in our system. The user is issued a username for our system; they enter that, then they are presented with the login options (e.g. password, IdP providers, etc). This also forces the customer organisation to specify exactly who they want to have access (which, in an org with 10k+ employees of which only a few dozen need to log in to us, is a good thing).

Plus this approach allows multiple accounts each associated with the same IDP account. Useful if the user needs multiple accounts for whatever reason.


I get it, but this is also, frankly, terrible. I should not be required to store your identifiers in my system in order to log in users.

I've always felt that email+email_verified would make much more sense.

I don't actually care about the email address being a unique person, just that they have access to it.


The email address is not guaranteed to be stable.


I get it, but you're throwing technical specifications at a product/human/application problem.

No one wants to build an application that has to invent its own id scheme or manage this complexity. The fact that the specs don't provide a solution here -- something like informing you when an email address is no longer valid (again, I get it, this is hard/impossible) -- means that the spec will always be in conflict with actual usage.


Just use an integer or GUID or something as primary key. It is still totally fine to use an email address as username, of course - just keep a separate email-to-user mapping and don't use the email itself as primary key.

Treat the email address like a name field: it's probably not going to change, but don't make it impossible to do so when someone wants to.
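As a sketch of that schema (SQLite via Python's stdlib; table names are illustrative):

```python
import sqlite3

# Integer primary key; the email lives in its own UNIQUE column so it can be
# changed without breaking anything that references the user.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE NOT NULL)")
db.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER REFERENCES users(id))")

db.execute("INSERT INTO users (email) VALUES ('oldlastname@company.com')")
db.execute("INSERT INTO posts (user_id) VALUES (1)")

# The user changes their address; every foreign key still points at id=1.
db.execute("UPDATE users SET email = 'newlastname@company.com' WHERE id = 1")
```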


> Just use an integer or GUID or something as primary key

We're not talking random apps and services, we're talking about the big providers that are commonly used for SSO, where "just change ur primary key" is wildly impractical at best, and more likely impossible at their scale. That ship, as it were, has already sailed.


Those providers already don't use email addresses as their primary key: their login APIs all allow you to get access to an underlying ID of the user (I say "an" as, in the case of Facebook at least, they no longer give you the global one but map you to an application-specific one, to prevent 3rd party apps doing correlation).


I changed my last name when I got married, and at the time I had a work email address that looked like oldlastname@company.com on the company's Google Workspace. Given that oldlastname was no longer my name, I changed my email address to newlastname@company.com.

This worked fine in Google services and some of the many work applications using Google for auth, but some of them were using the email address as a global identifier in a way that broke down when I changed my name. The services that migrated successfully were using a more stable identifier that persisted despite the address change.


I have a comcast email named after an old movie monster. Turns out that a former comcast customer once had the same account, and registered it with Google (and others). Not only can I not use that address with Google services, but creating Google Calendar events using my email will send the notification to the other guy (as a result, I created another email address for job interviews).

Every year I get several notifications that this guy has done something with his x-box, or registered a new device for something or other. It is absolutely nuts, and companies like Google refuse to let people like myself claim our own email addresses.


If you own the Comcast email address, how is he registering anything with it? Wouldn't he need access to the email to verify his accounts?


Because a shocking number of services don't actually require you to verify you control the email address. I'm talking about mainstream services like Spotify.

I'm in a similar situation, except in my case, the person providing the wrong email address never controlled it - it's firstnamelastname@gmail.com, which I signed up for when gmail was still invite only.

If I go to sign up for a service that someone else has signed up for with my email, I just do a password reset, and take control of the account. Either by transferring it to another email address that doesn't exist and then creating a new account, or if that doesn't work I just nuke the data.


He used to own the account, now I own it. I've tried resetting the google account password and the like, there seems to be nothing I can do.


> I've always presumed email was the most stable global identifier for a user, but that assumption appears to be wrong.

It might be the most stable, but that doesn't mean you can assume that it's at all stable.

Also, email addresses can't be assumed to uniquely identify a single person -- lots of people share email accounts with others.


The subject identifier, "sub".


Microsoft basically has no useful sub (sub is only useful when it comes to app credentials for Microsoft). It has oid, but if you want to support n providers with just a login field, rather than a "sign in with X" button for each, then you have a problem. Or should the user know their Entra ID and type it into the username field? Most of the time you use an email, and what Microsoft does in C# and in their docs is connect that with the oid. That's also why it's stupid that Microsoft and Google don't treat the email/preferred_username identifier that well: everybody bends the OIDC spec.


Is Microsoft's sub claim unstable?

I think you might have misunderstood the point. Miscellaneous claims like email/preferred_username shouldn't be used to identify 3rd party logins. Apart from not necessarily being unique, they're also vulnerable to change. Changing your email shouldn't make you lose access to all your accounts. The point of the sub claim is that it's unique and stable.


No, the sub is not unstable; it’s just that the sub is unique per client_id.

Yeah, I know that. We basically do both. You create the account with the email/UPN, but we also save the oid and then use the oid for matching. If the email changes, we update it. If you started your account without the provider and then somebody configured domain + tenant id, we first match via UPN, and after the first login it uses the oid. The user still uses the UPN to start the flow, but the matching uses the oid. We're only dealing with B2B, though. And we have our own login site that of course needs a UPN as well, thus the UPN on the Microsoft side is the same as ours; if you change the UPN on the Microsoft side, you need to change the login UPN on our side as well. Another solution would've been a unique logon site per tenant, in which case it would be possible to go directly to the IdP, but it doesn't matter that much with login_hint.


Curious to hear answers to this too. People forget usernames and that alone would lead to drop off. People are also reluctant to give out phone nos. which is anyway a terrible identifier and has the same issues as email.


Why can't the phone number be unique? I have a unique phone number and trust it as much as my email, unless I move out of my country.


Another edge case is that phone numbers get transferred and reused.

I might verify the number, and then stuff happens in the real world while I haven’t logged into your service, and someone new registers with my number.

The number is assigned to my account, but the new account can prove they own the number. If you’ve made the number unique, then you need special handling here.


Just like with email addresses, there are a nontrivial number of people who share the same phone number.


Phone numbers are also recycled, probably much more often than email addresses.


I disagree with GP. I think email is generally a solid identifier, and would be curious to know why they needed to change theirs.


Because comcast subscribers lose their email address when switching email providers, and the address becomes available to other subscribers.

This happened to me, and I cannot use my comcast email with google services, and some others. I need a separate email address for job hunting, because google calendar will assume the provided email address is linked to google services, and the notifications will go to the other guy.

It is really fucking annoying.


This is solvable by using a more stable email provider, ideally with your own domain. And yes I know domains need to be much easier for the average person to use (and avoid accidentally losing). That's one reason I run a domain registrar, to try and make this more accessible.

Once someone has their own domain, it also opens up things such as hosting your own IdP (or paying a small monthly fee to have someone else host it for you) and sidestepping email entirely.


> This is solvable by using a more stable email provider, ideally with your own domain.

Sure, but requiring ordinary people to do this is essentially a nonstarter. The whole point of SSO is to minimize user friction. Requiring a user to also set up a special email account with another service is a dramatic increase in friction, and I expect that a large percentage of users simply won't do it. Why would they?


The main problem would be solved by using a Gmail account, right?


also known in the industry as Account Takeover


I can't agree. This guidance just pushes responsibility down onto application developers and I would expect most of them to either do nothing or implement guidance inconsistently.

The right thing here is to offer APIs that fit the needs of the applications that will use them with as little extra responsibility as possible.

In this case, I'd have hoped that Google would set email_verified to false so that applications (or downstream IDPs) would know that they had to do extra verification.


The OIDC spec does provide a stable identifier, and Google implements it properly. My Google Workspace address will always be verified, so your solution doesn’t apply. Changing my email address at Google itself won’t unverify it. Also, you’re still relying on developers to read the docs and properly implement the spec.


I think you missed the point. No application developer wants to use "sub" as the identifier; they want to use "email" or "phone" because these a) are actual ways to message a human and b) do not require a deep understanding of any technical spec to do the intuitively obvious thing.

I am not saying that my solution works today. I am saying that is a completely natural thing to want and the fact that it doesn't work or that we're even having this discussion is failure of the people who designed and implemented these specs.


I don’t see it as a failure of the spec, but developers failing to read said spec. By the way, I’m a developer who does want a stable ID for users authenticating via third-parties. The fact is that email addresses and phone numbers can change, and should not be considered stable identifiers. If folks want to extract that information from an ID token, they can; but, don’t use them as a primary key.

No deeper understanding required.


Why is it even possible to create a new Google account with an email like 'user+suffix@domain' if 'user@domain' is already handled by google's mail servers and thus applies the plus-routing rules? Even in the non-exploity case that seems like a great way to create confusing mail setups.


A domain can freely move between mail servers. Google has a specific handling for a+b@domain.com, other servers might not. At the end of the day they are two unique email addresses, and that's how they should be treated across the internet.


I think this aliasing feature is too complex for its own good. Especially at Google's scale.


> user+suffix@domain

It's even worse than that. At least the +XYZ is specified in the email RFC. Google has decided further that periods in the local part also map to the canonical email, i.e. hi.my.name@gmail.com is equal to himyname@gmail.com, and all emails route to the second.
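An approximate sketch of that canonicalization (Google's exact rules aren't published as a spec, so treat this as illustrative):

```python
def gmail_canonical(address: str) -> str:
    """Collapse a Gmail-style address to its canonical mailbox."""
    local, _, domain = address.lower().partition("@")
    local = local.split("+", 1)[0]   # drop any +suffix
    local = local.replace(".", "")   # periods in the local part are ignored
    return f"{local}@{domain}"
```

Any two addresses that collapse to the same canonical form deliver to the same inbox, which is exactly why they must still be treated as distinct identities elsewhere.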


Another fun one is upper case vs lower case. I’ve been bitten by systems that are case sensitive, while the rest of the email world mostly is not.


I could swear I at least once received an email that was sent to myname.mydomain@gmail.com or something similar in my myname@mydomain email. It's been several years but I remember thinking that was fucked up and looking into the full email to see if there was any other explanation for me receiving it, which I did not find.


Google (and most email providers) also treat the user portion as case-insensitive.


Because of how old and legacy Google's authorization system is. A “Google account” is just a string.


> I remember being surprised to learn that Microsoft would send Email claims that were not created or validated by Microsoft, and that the email claim in general was not considered reliable.

> This was counter-intuitive to me, because I had thought the entire purpose of OIDC was to establish reliable identity via a 3rd party like Microsoft.

That might have been the original intent, but I find it very useful that OIDC can be more flexible. For example, I run a free login provider[0], and it works by validating an identity with a 4th party identity provider (IdP) either with upstream OIDC or direct email, and creating a privacy screen between the app and that IdP (ie so Google can't track every app you're logging into). The fact that you can bring your own email to Google means you can get the security and UX of Google OIDC with the privacy of email + password, with the huge caveat that now you have to trust LastLogin instead of Google. But we're working on protocols to reduce that dependence.

> Google’s documentation in fact warned against using Email as an identifier

I completely disagree with Google on this. Email is the only truly federated identity that people actually use. Until we have something better widely deployed (and there are some promising alternatives in the works), I believe email addresses should be treated as identities.

[0]: https://lastlogin.io


> I believe email addresses should be treated as identities.

No, for two reasons:

Outside of the western world, phones are more common than computers and have easier UIs in general, so a phone number is more likely to be their identity.

In addition, that means you're completely handing off your identity to your email provider. Considering many - looking at you google - are faceless organizations that can and will shut down your access without notice or appeal, you could lose everything.

Background: I launched Okta's OAuth and OIDC products, put together LinkedIn's courses on the same, and doing it again at Pangea Cyber.


> Outside of the western world, phones are more common than computers and easier UIs in general so a phone number is more likely to be their identity.

When I worked at Stripe, we found that far more people lost access to phone numbers than email addresses. And the reason is simple: if you can't pay your phone bill, you lose your phone number.

While going through support tickets to tabulate which auth issues we should focus on, I came across one person who had a Stripe balance that they needed to feed their kid. But they couldn't log in because they couldn't pay their phone bill and had lost their number. The very fine support folks got the situation resolved with other identity checks, but it was a huge wakeup call.

You simply _cannot_ use an identifier that requires ongoing payment for identity purposes. You and I are probably privileged enough to never have to worry about this, but everyone who falls below the lower middle class is entirely vulnerable to losing _everything_ this way.

> you're completely handing off your identity to your email provider. Considering many - looking at you google - are faceless organizations that can and will shut down your access without notice or appeal, you could lose everything.

Versus handing off your phone number to organizations that routinely get socially engineered into transferring phone numbers. This is such a common attack that my mom knows about it. Ironically, the facelessness of most email providers also protects you from having your identity yoinked out from under you by one of their staff: I don't personally know a single person who's had their email turned off as a result of social engineering.


Plenty of other reasons to "lose" a phone number. Especially temporarily. Accounts deemed inactive. Locked devices, in some cases.

What makes the situation intolerable is the proliferation of Google-inspired "customer service" designed to prevent any prospect of useful contact with paying customers. Kafka-esq nightmares are currently an everyday hazard.


I'm not advocating for phone number, simply saying that "assuming/forcing email is insufficient"

Giving people the option between phone, email, or whatever is a better approach so they can plan accordingly.


Another common issue is moving countries which often changes your phone number.


Unless you're using PKC, you're completely handing off your identity to someone. The question is what's your threat model for having your identity taken from you. For me I've decided I trust DNS, because if it fails we probably have bigger problems. So email on a custom domain is as strong an identity as I require. I see a Gmail account as good enough for most people, but not for me personally.

Ideally I would like to see people hosting their own IdP servers from their laptops at home over something like ngrok but e2ee, but we have a ways to go for that.

Aside: given your background at Okta and ngrok, we have a lot of overlapping work. I'm curious of your thoughts on my LastLogin.io project?


Sadly, more and more countries require mobile service providers to check the identity of the user before providing a phone number. Not everyone wants to be easily identifiable IRL. Meanwhile, unique e-mail addresses are available to anyone without proof of identity.

And the goal more often is to identify a unique user, not have a user that can be traced back to real life.


> I believe email addresses should be treated as identities.

Because nobody ever shares an email address, and email is super secure

We can wait forever or we can begin building the solution


Sharing email identities is a reasonable way to give group access to a resource.

And solutions are being built. The mission of LastLogin is to accelerate this.


There's an important difference between email as an identifier in your own system, versus email as an identifier for a linked Google account. I agree with you that email works okay as an identity within a system you control, but agree with Google that it's a terrible way of linking with an external system: ephemeral, potentially tied to multiple accounts (!), and pointless when you've got an actual ID to use instead.


That's fair. I'm coming at this from the perspective that using Google SSO is great for UX, but a bad idea for privacy and vendor lock-in reasons.

I built LastLogin.io to provide the SSO UX without sacrificing privacy and to eventually move towards better protocols for user-controlled identity


I think it would be really great if you could compare/contrast LastLogin with Dex. Especially given they're both Go. Just curious if you evaluated dex, etc.

Also toyed with... Portier (nee Mozilla Persona). But ultimately dealing with this email normalization seemed like a losing/hard battle. And I mostly accepted that I'll likely die before a good solution is pioneered here and have refocus attention elsewhere. :/



Hats off, what a README.


Is this really a Google OAuth issue, or more a failure by many service providers to properly verify the OAuth token assertions before allowing access? Seems to me the latter.


It sounds like the issue is that these service providers are obeying Google's aliasing rules, but also ignoring the fact that you shouldn't be using email as a primary identifier [1]. It's funny: if they had adhered to the spec more, they'd be fine; but if they had adhered less and treated aliases as distinct emails, these platforms would at least be more secure.

[1] https://developers.google.com/identity/openid-connect/openid...


I believe OAuth is working as expected. It provides valid authentication/identity for email addresses because "user@domain" and "user+wildcard@domain" are still validated as email addresses "owned" by the user.

The issue is with the Google org website: admins cannot revoke credentials for accounts/emails they cannot see.

> Because these non-Gmail Google accounts aren’t actually a member of the Google organization, they won’t show up in any administrator settings, or user Google lists.


> Following this flow, you can create a Google account using a support ticket email address, potentially view the contents of the ticket to finish the account creation, and start using the support email address to Oauth into stuff.

That could impact lots of small companies.


My main takeaway from this is that web authentication is still a horrible mess.


…because people don’t read the docs and instead just assume that it works how they think it should.


Have you seen the oauth docs? I can’t imagine anyone having read and understood them fully, unless you dedicate your life to it.


Half of it I think is because people take "basic auth" offered by web framework, and then try to retrofit OAuth/OIDC/SSO on top of it.


If so many people are making the same mistakes, it’s your fault, not the users.


Sounds like a classic footgun to me.


If people don’t read, it’s their fault. Reading the docs is not a big ask.


Documentation (including code comments) are vital and important, but it's far better to bake the proper constraints into the code/specs so that it's hard to make mistakes. Then the docs are less necessary, shorter, and easier to understand. I think this is what GP is getting at.


Typical engineer spotted.


I don't think that's engineer so much as lazy bureaucrat in power. It's the mentality Douglas Adams made fun of with the "Your planet is being destroyed, you had plenty of time to read the posted notice".


How dare we read documents before we put technology into use.

Like asking a bridge engineer to know the spec of his bolts.


And if every purchaser of said bolts implemented them incorrectly, is it more likely that your specs, docs, or bolt design are faulty? Or do you just think no it’s everyone else who is stupid?


Except in this case the vast majority of people have read the docs and then gone on to implement them correctly.

I’m sorry that you wrote buggy code because you didn’t read. But it’s the height of arrogance to assume that because you did, everyone else surely must have as well.


This is why login is a horrid mess. Because if it's too easy then people who don't know what they are doing set up websites.


When some people ask why most of us sane and practical folks still use and demand simple password authentication, it's because passwords fucking work.


I'm still firmly in the mutual TLS camp. Nothing is easier than never having to type in a password, and good luck cracking TLS.


So much this. OAuth and their ilk are, in my opinion, not trustable and suffer from real usability issues.


The OIDC spec tells you that you must not use e-mail as a unique identifier. You must use the 'iss' and 'sub' fields as the username in your application.

Why? Well, for a start, it's obvious that user e-mail addresses can be re-used. If you've got a contractor working for Business A and Business B, both of whom create a user account in their authentication service for them, then as a SaaS platform, you can't match their e-mail address to a single B2B customer.

Secondly, there's the really obvious thing that e-mail addresses change. Businesses get bought, change name, go through mergers, etc. etc., and people's names change too (marriage, divorce, because they feel like it).

I found implementing SSO to be really challenging for a start-up. Getting it correct is hard, and you need to have a good understanding of the general concepts and OIDC and OAuth2 before trying to put it into use. Auth0 have a good book. If you don't understand this, then you'll probably end up doing something like implementing password grant auth everywhere and leave your application insecure.
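As a concrete sketch of the spec-compliant key (the claim names 'iss' and 'sub' come from OIDC; the function and example values here are made up for illustration):

```python
def stable_user_key(claims: dict) -> str:
    # Per OIDC, 'sub' is locally unique and never reassigned within an
    # issuer, so (iss, sub) is the only safe primary key for a user.
    # Email is display metadata: refresh it on each login, don't key on it.
    return f"{claims['iss']}|{claims['sub']}"

# Illustrative ID-token claims (values are fabricated):
claims = {
    "iss": "https://accounts.google.com",
    "sub": "110169484474386276334",
    "email": "someone+alias@example.com",  # can change or be aliased
    "email_verified": True,
}
```

A lookup table keyed on `stable_user_key(claims)` survives email changes, mergers, and aliasing, which is exactly the failure mode the parent comments describe.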


Sometimes reading articles like these is a good way to alleviate any accumulating imposter syndrome. "Oh, I'm interfacing with a third-party system for something that represents an abstract actor; it better have a stable, non-stringy, reliable identifier." And yes, the spec is very clear about this, beyond just basic considerations of building a remotely robust system.


A microcosm. There is no such thing as OAuth2. OAuth2 is just a way for megacorps to implement their own arbitrary/proprietary auth systems. It is a toolkit for making auth systems, not an auth system. So we end up in a world where oauth2 was supposed to be a standard, but instead every megacorp has their own incompatible implementation. And sure, people will dev for the megacorp use cases... but that just means the "standard" is now whatever google does/etc.

And they all have their own little bugs like this. We should go back to oauth1. It was a real standard. Not a toolkit for making standards.


I wouldn't really call this a bug, more like an unfortunate side effect of combining these particular components: domain names that can change ownership, BYO email (as backup email & email provenance), the liberal allowance of plus-aliases (which I'm sure someone somewhere is claiming a business need or they would have killed it long ago), and service implementers not reading the documentation (or largely copying solutions from a video or example with cut corners for brevity/simplicity, likely to facilitate its easy consumption).

If I were designing a circuit with a few PCB components and needed to introduce resistors and transistors as appropriate for the voltage and current needs of the device, would you expect me to read the data sheet or just guess it from a simpler example and run with it? In a lot of cases the circuit would still work, or it would after a few bench tests and a bit of probing. But maybe it wouldn't be as efficient, and a component would short out, leading to low MTBF and sad customers. Worst case scenario, maybe combusting batteries and real harm. Now ask yourself: is it really the PCB module manufacturers' fault that the device fails prematurely? Or is the device manufacturer the one responsible for reading the data sheet?

I don't typically hold all software to such rigorous expectations but when it deals with authentication and authorization I would expect service owners to be thorough.

TFA even says that the issue doesn't exist if the docs are followed. Alphabet did at least acknowledge there's a weakness there by granting the bounty, maybe they'll provide some controls for company administrators to allowlist/rejectlist plus-aliases or nonexistent roles, or maybe restrict the migration of Apps-affiliated emails to non-org claims? (My guess is they're measuring the impact of this, or prioritizing the measurement of impact, where priority is low because it is a problem with clients that assume email claims are more authoritative and permanent than they actually are).

I suppose the definition of "bug" depends a lot on the definition of "expected" and who's expecting, but I would assert it is not a deviation from intended behavior, at least, and not unexpected to those who grokked the docs.


Um...okay?

Is cross-provider compatibility related to this article about the security of Google OAuth2?


What am I missing here? Outside of the support-system/Zendesk and unattended-old-domain methods identified, I can't make a new Google account for whatever@mydomain.com without being asked to verify it - so what's the real likelihood of abuse?


The idea is that you do this in advance, at a time when you have legitimate access, then you later lose that access.

So say you have a egamirorrim@mydomain.com google account legitimately. You can use an alias like egamirorrim+woopsie@mydomain.com to create a new google account with a verified email address, resulting in "log in with google" google sending an email claim egamirorrim+woopsie@mydomain.com.

Then, later, mydomain.com fires you. You can no longer log in with the real egamirorrim@mydomain.com associated account, as it was disabled by an administrator. However you can still log into the new google account, egamirorrim+woopsie@mydomain.com , since it's not associated with your organization.

The thing is, afaict this only becomes a problem if the provider is doing authz based exclusively off of the email claim. I've used OIDC in the past and you are not supposed to grant access to resources based on parsing text in the email claim!

I can understand why the blog post author found this counterintuitive, but as they note the docs even warn against doing this.

The blog post goes on to make this statement:

> Most of the service providers I tested did not use HD, they used the email claim.

... OK, well what are they "using" it for? Does this trick actually work on any real world services? If so, I would like for them to be named and shamed.

Even if you (erroneously) assume this value is unique and immutable, that alone doesn't necessarily grant access to anything in and of itself.
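For Google Workspace specifically, the safer check the docs point toward looks roughly like this (a hypothetical helper; 'hd' and 'email_verified' are documented Google ID-token claims, everything else here is an assumption for illustration):

```python
def is_workspace_member(claims: dict, org_domain: str) -> bool:
    # 'hd' (hosted domain) is set by Google only for accounts that are
    # actually members of the Workspace org -- unlike the domain part of
    # 'email', which can also belong to an aliased or consumer account
    # that merely verified an address on that domain.
    return claims.get("email_verified") is True and claims.get("hd") == org_domain
```

Under this check, the `egamirorrim+woopsie@mydomain.com` consumer account fails, because Google never issues it an `hd` claim for the org.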


Perhaps my assumption is faulty. If an email is sent to egamirorrim+woopsie@mydomain.com it would go to egamirorrim@mydomain.com which you no longer have access to. Do you simply never again need to receive an email at egamirorrim+woopsie@mydomain.com?


> October 5th- Google paid $1337 for the issue

love the tongue-in-cheek "leet" amount of $1337


Well, the best solution is basically to allow the creation of the account but keep it deactivated so that a human needs to check it. That at least works for things like GitLab or other services where an organization signs up. The hd claim isn't actually a problem, since you need to validate your domain, and if you're a SaaS provider that is B2B-only, that's OK. Microsoft is even worse though, where you need a different claim than email, depending on what you are doing (UPN).


I may or may have not used a version of this to sign up for multiple free trials - for multiple products.

sign up johndoe@gmail.com then next time johndoe+1@gmail.com then +2 ad infinitum


This is a feature, not a bug. Anyway, what is to stop someone who owns a domain from doing this with actual forwards to do this sort of ban evasion?


I'm struggling to understand how to reproduce this.

I have a Google Workspace organization: org.com

I create a new user: sneed@org.com

Where/when/how is the sneed+alias@org.com created? How is the user sneed@org.com doing this if they don't have administrative access to organization management?


Just try it, google will merrily redirect all emails to a+b@org.com to a@org.com. To answer your questions:

> Where/when/how is the sneed+alias@org.com created?

Where: in gmail's alias list

When: at account creation

How: gmail/gw redirect everything of the form abc+xyz@domain to abc@domain

> How is the user sneed@org.com doing this if they don't have administrative access to organization management?

The user isn't doing anything, it's a "feature" of google mail.


Using emails like this is usually found during pen tests. Not Google's fault, I would say, even though I think OAuth is overly complicated. This is along the lines of sending secrets in the token. Tokens are signed, not encrypted.


I use OAuth2 all the time. And I don't understand the conflation of email and OAuth2 being discussed in that article. With Google's OAuth2, I can get the user's email - but so what? I have no need for it and I never use it.

"Because these non-Gmail Google accounts aren’t actually a member of the Google organization, they won’t show up in any administrator settings, or user Google lists."

I don't understand that statement either. They do show up. Now of course the org could choose to not do anything to manage the access of those users - which is common enough. I made a tool used by some of my larger clients to a) get reports of users and their permissions (available via Google's APIs) and b) batch delete those user permissions.


Anyone else unable to access what I assume are supposed to links for the centred bits of text throughout that article? I had a hard time understanding parts of it...


Turns out, once I got to read it on desktop machine, they weren't links at all but captions for images that weren't showing for me on mobile.


Not that I am being vain (though I actually am), but how's that my post of the same link 4 days ago got just one upvote? :)

https://news.ycombinator.com/item?id=38670644


Whether or not a link gets any attention here depends on a lot of things aside from the link itself. For instance, if it happens to hit the front page at the same time as something else that is drawing everyone's attention then it may simply go unnoticed.

Also, although this isn't the case here, if it's just a link without a description or some sort of commentary explaining why the link is of interest, it may not get the traction you'd expect.


What's the actual vulnerability? Steps to reproduce?


> * August 7th - The issue was triaged

> * October 5th - Google paid $1337 for the issue

Is it just me or does it seem a bit odd that payout after triage took almost two full months? Initially I was positively surprised that they came up with a triage verdict within 2-3 days but what's the deal with the payout coming so late?


Not sure about Google VRP, but I've gotten multiple payouts from Chrome over the years and I believe there's a schedule. The rewards panel meets every x weeks in order to award payouts on qualifying reports. Almost no bug bounty programs pay upon triage by the way, they pay after resolution.


I run a bug bounty program and I pay upon successful triage: while our engineering teams do have security SLA’s, it’s not fair to whomever reported the vulnerability to wait for our (sometimes broken) processes in order to be paid.


It's pretty normal for large companies to take ages to pay up. The real problem here is this major bug only elicited a token $1337 payment.


> The real problem here is this major bug only elicited a token $1337 payment.

Agreed.


Yeah, definitely should have had a higher payout.


Is it just me that feels $1337 is an insult? FAANG pays way too low bounties for this kind of stuff. This kind of info would be much more valuable on the black market.


To be clear, the information shows a rogue employee how to create accounts in third-party apps (Slack, Zoom, etc.) that won't be automatically deleted when the employee is terminated. I'd love to hear why you think this information would be "much more valuable" than $1337 on the black market as that is not obvious to me.

Also, if anyone should be paying bounties, it's the third-party apps, since they're the ones which are vulnerable. I'm impressed Google is paying a bounty just for pointing out a footgun. I would probably not have bothered reporting this to Google if I had found it; $1337 would be more of a pleasant surprise to me than an "insult".


In fact I'd argue that Google paying a bug bounty for something that is well-defined and documented behavior and will never be "fixed" actually undermines the program.


> Because these non-Gmail Google accounts aren’t actually a member of the Google organization, they won’t show up in any administrator settings, or user Google lists.

That's why. This bug allows an attacker to retain access to various accounts attached to an already-compromised company or employee of the company. Not only that, but the retention is completely invisible to the account administrators.

Needing the same level of access that an employee has in order to utilize it doesn't make it less valuable. There are plenty of valuable bugs that can only be utilized from specific positions. Consider how many hacks have happened because an employee's devices or accounts were compromised, rather than some server system that no one individual owns. The recent Okta hack happened that way.


The rogue accounts would show up in the administrative settings in the third-party apps, and they would stick out like a sore thumb because they'd have weird email addresses. So they're not completely invisible, albeit not visible from one central place.

> Needing the same level of access that an employee has in order to utilize it doesn't make it less valuable.

The only way that would be true is if compromising an employee account has no cost, which is obviously not the case. Thus, attackers would prefer to purchase a vulnerability that doesn't require also compromising an employee account.

I trust tptacek is correct that Zerodium wouldn't even pay $133.70 for this: https://news.ycombinator.com/item?id=38722395


Buy why should Google pay them at all? One of the first screenshots of their documentation says you shouldn't trust the email claim, so they're obviously aware of this issue. The problem is third parties using Google's OAuth incorrectly. If anything, Slack/Zoom/etc should be paying.


Nope, I'm with you. Based on the quick blurb about what the vuln was, $1337 is an absolute steal for Google. Paying a team or outside pentesters to attempt to find this would be _way_ more expensive.


> Paying for a team or outside pentesters to attempt to find this would be _way_ more expensive.

But doesn't Google have teams of internal pentesters already? You could hire dozens of external companies and they might not find it.

This system is a "no cure, no pay" approach. I do think they should have paid the reporter a lot more though.


It’s essentially 0 as far as they’re concerned.


Especially when Microsoft paid out about 75k for essentially the same issue.


Did Microsoft pay the entire $75k? The people who found that issue reported it to multiple stakeholders, and their blog post[1] merely says they were awarded $75k in total. I assume the bulk of the bounties were paid by the service providers who failed to heed the warning in Microsoft's documentation.

Also, the Microsoft issue was far worse as it could be exploited by anyone; the Google issue requires a rogue employee or a misconfigured email ticketing system.

[1] https://www.descope.com/blog/post/noauth


On a second read you're probably right about it being multiple vendors paying out.


From a practical perspective, they probably should "match" what the black market values these exploits at, and I surely wish they would give much higher bounties in general (which they can certainly afford!), but I don't think they ethically need to (so it's not an insult in my view).

Turning these exploits/vulnerabilities to black market is not only immoral but also highly illegal, so the "value" is inflated due to these "risk" factors. You can't really expect the same from the affected company themselves.

It's like saying if you found a lost item and you ask a large sum from the owner when you return it, because "I can get much more if I choose to just sell it on the street".


> Turning these exploits/vulnerabilities to black market is not only immoral but also highly illegal

I assumed 'black market' here means irresponsible disclosure; there are many sites operating legally in that space (Zerodium being a prime example).

Who are the customers? Theoretically nation-state actors, but do we really know? Either way, you're selling the vulnerability to a private party. To my knowledge, selling knowledge of an exploit to almost anyone is legal (unless it could be classified treason or a threat to national security or something).

As is publishing the security research after responsibly disclosing (as the blog author did here), though we've had to fight pretty hard to get to the point where warning people of threats to their digital safety (often because companies are too lazy to protect their users) is generally understood to be legal.


I'm not a legal expert, but is it necessary for an act to pose a threat to "national" security for it to be considered illegal in places like the United States?

In my country, we have a law known as "The Crime of Destroying Computer Information Systems." This law makes it a criminal offense to intentionally harm computer systems in a way that could compromise them (which is somewhat vague in its definition, I'd admit). This includes leaking private information from these systems, and it applies even if the affected systems belong to private entities. And if you sell exploits to a third party and are later caught, you will be considered an accomplice and there are precedents for this.


The United States has similar laws in place. There have even been cases where people were convicted for responsible disclosure, since they had to circumvent the system to determine that there was indeed an exploit. It's not as common as it used to be, but there are plenty of small financial firms that would still go after someone reporting an exploit.


Zerodium isn't going to pay you $133.70 for this.


If you pay too much in bounties, you risk having your own red-team employees leave so that they can report bugs externally and get paid much more via bounties.


Could at least have been $31337.


Google hasn't fixed it, so it seems they really don't value this info.


Per the article, Google fixed it, but only for google.com accounts.


Might this be because to be actually vulnerable a company needs to have the ticketing-like system in a sort-of unsafe setup?


Eh, depends on whether the person is financially stable. The tongue-in-cheek number may stand out stronger on a resume.


"Is it just me that feels $1337 is an insult?"

Y0U 4R3N7 31173 3N0U6H 70 C47CH 7H3 R3F3R3NC3


So you're saying it's a joke on multiple levels.


They could have given them $313373 or at least $31337 instead.


A third of a million? You must be sky high.


I'd certainly try to sell the exploit to someone else instead.


> Today I’m publicizing a Google OAuth vulnerability that allows employees at companies to retain indefinite access to applications like Slack and Zoom, after they’re off-boarded and removed from their company’s Google organization. The vulnerability is easy for a non-technical audience to understand and exploit.

> October 5th- Google paid $1337 for the issue

Is that a joke? Does Google really value security so low?


If I read the post right, the behaviour in question was already mentioned in the docs before they reported this. I'm more surprised they got any money instead of an "it's a feature, not a bug" response.


Exactly. They paid for a detailed example they can point to of why one should follow the docs. Lot cheaper than a tech writer.



