Why has the person who wrote this vuln, Zhu Shung, not made any commits/contributions since they pushed this out two months ago?: https://gitlab.com/memorycancel
Can somebody who understands what they were trying to do here opine on whether this might have been malicious or was more likely just an honest mistake?
Thanks for doing the issue sleuthing. This is an excruciatingly bad look.
You'd have thought with all the code-owner functionality that GL has, they would lock down the `/lib/gitlab/auth/` files to require a security engineer to give additional signoff on top of a normal review. It looks like anyone at Gitlab can approve changes to the auth code (except LDAP): https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/C... which is terrifying if true.
I do so desperately hope this doesn't come across as throwing shade, because hindsight is 20/20 and all that, but I do also think there was some kind of weird process breakdown here, because this change somehow slipped past both a "four eyes" check and an appsec review phase.
This was already required in this case; you can see in the comments that an additional AppSec review was done and the AppSec team signed off on the MR, above and beyond the normal code review process.
Heh, was this you [1]? Pretty much asking the same good questions you brought up in a different post.
Additionally, I see that the Senior Director of Engineering, Tim Zallmann, left a bunch of GitLab project repos about 14 hours ago as of this writing. He was one of the folks who tried pinging [3] Mr. Coutable (one of the reviewers who is currently OOO). The ping was likely about the discovery of the security vulnerability.
> I could easily imagine myself missing this when reviewing the PR "oh it's just changing a whole bunch of specs, go ahead".
That's one of the major benefits of the "tree view" in MRs: you can collapse the "spec" folder, collapse the "ee/spec" folder, and that leaves "db" and "lib/gitlab/auth" visible, which should for sure set off mental alarm bells: https://docs.gitlab.com/ee/user/project/merge_requests/chang...
An integration test that creates dummy accounts through every method, including SSO, and then attempts to brute-force the password should find "12345678" within an hour.
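As a minimal sketch of what that could look like, assuming a hypothetical instance URL, login endpoint, fixture accounts, and wordlist (none of these names come from GitLab's actual test suite):

```python
import requests

# Assumption: the instance under test and its form-login endpoint.
BASE_URL = "https://gitlab.test.example"
LOGIN_PATH = "/users/sign_in"

# A handful of the usual suspects; a real test would load a larger
# leaked-password list from the test fixtures.
COMMON_PASSWORDS = ["123456", "12345678", "password", "qwerty", "letmein"]

# Accounts provisioned through every path the product supports
# (sign-up form, admin API, SAML/OIDC just-in-time provisioning, LDAP sync, ...).
TEST_ACCOUNTS = ["signup_user", "api_user", "saml_user", "ldap_user"]


def login_succeeds(username: str, password: str) -> bool:
    """Return True if the login endpoint accepts the credentials."""
    resp = requests.post(
        f"{BASE_URL}{LOGIN_PATH}",
        data={"user[login]": username, "user[password]": password},
        allow_redirects=False,
        timeout=10,
    )
    # Assumption: a successful login responds with a redirect.
    return resp.status_code == 302


def test_no_provisioning_path_sets_a_guessable_password():
    for account in TEST_ACCOUNTS:
        for candidate in COMMON_PASSWORDS:
            assert not login_succeeds(account, candidate), (
                f"{account} accepted the weak password {candidate!r}; "
                "some provisioning path is setting a guessable default."
            )
```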
I think a test like this would also have found the Dropbox and macOS bugs that let you log in to any account by using an empty password:
Edit: Oh, the password was "123qweQWE!@#000000000". Technically doable with an efficient password cracker that favors common patterns. zxcvbn’s entropy estimate says it would take about 10^10.5 guesses. That’s roughly a week at 50k guesses/s. That’s a hell of an integration test for most software.
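If you want to reproduce that estimate, here is a small sketch using the Python port of zxcvbn (assumes `pip install zxcvbn`; the exact guess count can vary between zxcvbn versions, and the 50k guesses/s rate is just the figure quoted above):

```python
from zxcvbn import zxcvbn  # Python port of Dropbox's zxcvbn strength estimator

result = zxcvbn("123qweQWE!@#000000000")

# The comment above quotes ~10^10.5 total guesses; exact value may vary by version.
print(f"log10(guesses) ~ {result['guesses_log10']:.1f}")

# At the assumed 50k guesses/second cracking rate:
days = result["guesses"] / 50_000 / 86_400
print(f"~{days:.1f} days of cracking")  # on the order of a week
```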
Which then opens you up to exactly the class of bugs that also caused this issue. If you have test-specific code and feature flags and then test a tweaked version, you aren't really covering all the cases.
Just like in this case, where a hardcoded password was set, presumably so a test could log in, judging by the naming "test_default".
Posted some of this in a sibling comment, but there are a few ways you can address this (not all viable for Gitlab).
First of all, in code review: requiring merge requests that touch security-sensitive code (e.g. /lib/gitlab/auth) to get a review from a security engineer, enforced via CODEOWNERS, is the obvious process gap to close.
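For illustration, a minimal sketch of what that rule could look like in GitLab's CODEOWNERS section syntax; the section name and group handle here are made up, and the real file is the one linked in the comment above:

```
# .gitlab/CODEOWNERS (illustrative sketch, not GitLab's actual file)

# Require an approval from the security group, on top of the normal reviewer,
# for anything under the auth code paths.
[Authentication]
/lib/gitlab/auth/     @gitlab-org/security-codeowners
/ee/lib/gitlab/auth/  @gitlab-org/security-codeowners
```

Note that, as far as I know, this only actually blocks a merge when "Require approval from code owners" is enabled on the protected target branch; otherwise it's just a suggested-reviewer hint.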
If you're paranoid, have security engineers review the diffs of all releases before they are shipped (in practice the security engineer would filter the `git diff` down to just the files you care about, like `/lib/gitlab/auth/`). This is expensive, but probably not as expensive as "whoops, our Chinese subsidiary completely broke our auth for 3 months". If you have a good enough bug bounty program, I'd expect your pentesters to spot this in less than 3 months, as reading git commits to auth code is a good way to spot potential bugs as they are committed. This is hard if you're doing continuous deployment (I don't know if GL does this), but since they have numbered releases, I'm guessing (hoping?) they don't run gitlab.com off the tip of master. If you are doing continuous deployment, you need to be damn sure that your CI/CD pipeline is rock solid and that you have the right reviewers and controls on the sensitive bits of the codebase.
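Concretely, that release-diff filtering can be as simple as something like the following (the tag names are illustrative):

```
# Show only changes to the security-sensitive paths between two release tags
git diff v15.2.0 v15.3.0 -- lib/gitlab/auth/ ee/lib/gitlab/auth/

# Or just list which sensitive files changed at all, to scope the review
git diff --name-only v15.2.0 v15.3.0 -- lib/gitlab/auth/ ee/lib/gitlab/auth/
```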
These are processes; you can also make this kind of error less likely at the architectural level. A general approach for larger applications is to split Auth out into its own service with its own repository, release cycle, restricted set of contributors, etc; while microservices are probably somewhere between Peak of Inflated Expectations and Trough of Disillusionment, splitting out services that require a strong security boundary is a good use-case. You can even use an OSS off-the-shelf system like https://www.keycloak.org/ to avoid having to build your own LDAP/SAML/JWT authority. (Or outsource this to Okta, though that seems a bit more dubious these days).
This is harder for GL because they have the FOSS monorepo and splitting out auth code would be awkward. But Gitlab currently deploys a bunch of components from the monorepo (https://docs.gitlab.com/ee/development/architecture.html) and if there was a separate auth component, a paranoid security team could reasonably deploy their auth service from a fork that they update in a more controlled fashion, instead of just auto-deploying all changes to security-sensitive code as appears to have been the strategy here (if security engineers actually looked at this change and OK'd it, there are bigger problems).
It’s easy to spot in hindsight. I can see how it slipped through if they didn’t actually closely read the changes in each file; it’s easy to let your guard down with many-file changesets like this. (That’s a cognitive bias worth keeping in mind when doing security review, particularly if that review is your only control in place to prevent issues like this.)
Fundamentally, “fix tests” changesets like this should not touch non-test code, and anything touching lib/gitlab/auth would be a big code smell to me if I were reviewing.
The advantage of the code owners approach is that it highlights that there were changes to the prod auth code (it shows which rule triggered in the MR, I believe), which gives you an additional chance to spot issues to focus on. It seems they also have bots running checks, so they could have the bot ask for an extra review if …/auth/ was touched. Test helpers shouldn’t live next to prod code, for obvious reasons, so the “prod auth code owner” rule being hit would have been an alarm bell on this MR.
I do think it’s concerning that security sign-off was given here, so I’m keen to see the post-mortem and what other gaps they identify.
I thought I had an account there for a while (I don't know how long, 2-3 years?), and all of a sudden I got a password reset out of the blue regarding this particular issue. No other emails in my history regarding GitLab whatsoever, so it's even possible I didn't have an account at all.
I go over to the website and try to log in with my Gmail account via SSO, which fails because I "already have an account". So I proceed to reset the password via email alone.
After I'm in, it tells me I've had an account since January 22nd, 2022. It's super unlikely I created the account this year, so I don't know what's going on over there, but it's not accurate bookkeeping.
This appears to be related. One GitHub user shared an alert they got today, two days after connecting their GitHub account to GitLab, about an app having been added to their account. Their GitHub has 2FA turned on and a very strong password: