Hacker News
A hacked Microsoft test account was assigned admin privileges (arstechnica.com)
247 points by taimurkazmi 3 months ago | 52 comments



Reminds me of an ancient Roblox hack I heard about, where they had a non-production staging version of the website that users could sign up for (with an accompanying "nothing here is permanent" banner). A new administrator account was added to production, and someone was able to register the same username on the staging site and use its cookies and tokens to hijack the production account and compromise the site. I can't imagine these types of problems are that uncommon: tokens generated from a username or user ID without a different secret for production and staging, a staging site that talks to external services which mix up permission grants with production, and so on.


I once implemented delivery APIs (DHL etc.) for e-commerce sites. We did deliveries for months with labels printed from a test API (we forgot to switch to the prod servers); the packages did get delivered, but we weren't billed. We came clean as soon as we realized what had happened :).


Some years ago, I was working at a small retail store and we built a new site, going live with Stripe in test mode. That one took a few months to notice.


This is why audience fields exist in tokens


As trivial as your statement sounds, thanks for finally explaining something which to me never made sense under most conditions :-)


This attack must rely on secrets being the same in prod and test. They should also solve that problem!


But most auth implementations don’t bother checking it.
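The audience check is cheap to implement even when the signing secret is shared. Below is a minimal, hypothetical sketch (stdlib only; the `sign`/`verify` helpers and the secret values are made up for illustration, not Microsoft's or Roblox's actual scheme) showing how a token minted for staging is rejected by production purely on its `aud` claim, even though the signature is valid:

```python
import base64
import hashlib
import hmac
import json

def sign(payload: dict, secret: bytes) -> str:
    """Produce a toy signed token: base64(JSON body) + '.' + HMAC hex digest."""
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify(token: str, secret: bytes, expected_aud: str) -> dict:
    body, sig = token.rsplit(".", 1)
    good = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):   # constant-time comparison
        raise ValueError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(body))
    # The audience check: a token minted for staging must not be accepted
    # by production, even when (as in this attack) the secret is shared.
    if payload.get("aud") != expected_aud:
        raise ValueError("wrong audience")
    return payload

PROD_SECRET = b"prod-secret"  # hypothetical; each environment should get its own

staging_token = sign({"sub": "admin", "aud": "staging"}, PROD_SECRET)
try:
    verify(staging_token, PROD_SECRET, expected_aud="production")
except ValueError as e:
    print(e)  # prints: wrong audience
```

Real systems would use a JWT library rather than hand-rolled HMAC, but the point is the same: checking `aud` is one line, and skipping it is what makes shared-secret environments exploitable.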


The dev/production boundary in large companies is much more porous than people like to think. Consider a typical day: you log in to your PC, check email, then use the same creds to log in to your Azure portal (after all, it's all backed by the same tenant). Your account is linked to GitHub and your cloud account.

Groups and teams get created all over the place, with mysterious permissions, just so that you can play in Teams or OneDrive, and these persist in the company directory, almost indistinguishable from security groups.

Sometimes you get automated emails asking if you still need something, but the messages are opaque, and in a really big company there's no one you can really ask (helpdesk takes two days to get back to you, and what are you gonna do, hit up John Savill on Twitter?), so you hit OK and try to carry on.

Inevitably, the fabric starts to rend, and an attacker gets lucky at a weak spot and can jump sideways through the tenant to get what they want.

As a wise CISO once said, hackers don't break in, they log in.


I like this post because it boldly assumes what "typical" means.

Of course everyone is using the (MS) cloud, Skype, Twitter, OneDrive, and what have you. It even throws in some guy's name for good measure.


I mean, the help desk thing is legitimately an issue. Taking shortcuts is always the killer: security fails when the effort to act securely is high.


> Kevin Beaumont—a researcher and security professional with decades of experience, including a stint working for Microsoft—pointed out on Mastodon that the only way for an account to assign the all-powerful full_access_as_app role to an OAuth app is for the account to have administrator privileges. “Somebody,” he said, “made a pretty big config error in production.”

Without knowing details of the system: That would not seem to be the problem, and I'm surprised someone with expertise would say it is. There should be no way for someone to make that error. Whoever designed it and whoever administers it should have made it impossible; they would be responsible.

If you build and operate a factory with a button that electrocutes everyone inside, and someone mistakenly presses that button, it's clear where the problem is.


I doubt the problem is technical. There were probably 20 best practices and safeguards in place to prevent it technically. The problem is that guardrails and best practices only matter if they are cared about by the organization/leadership/bureaucracy. Over the years I've been told many times to give super admin/root rights to VIPs who had no business with such things, even though it overrode all policy/procedure/regulation/law.

It's also compounded by every job being a cross-cross-cross-parttime-cross-cross-dualrole-trirole job these days.

I've seen role-based controls that had more roles than actual permissions that could be granted, so the entire point of the RBAC was self-defeating. It actually took less time to simply grant permissions individually, but of course that meant the reports would not show roles, so it was not allowed.

That doesn't come from the tech people; that comes from poor leadership.

On a side note, I once designed an add-on to an RBAC permission system to overcome this problem for an in-house ERP. We had a permission type called "Permission Exception": if someone needed a permission outside of their role, they got it assigned this way, so a report could be generated showing everyone who could do things outside of their job role. I've never seen it anywhere else. All it did was add a flag to the permission, but it worked. HR would check the permission exceptions every quarter and see what needed to be removed from people. It kept the permissions in check by an actual knowledgeable authority rather than some part-time help desk position cluelessly trying to untangle permissions by trial and error.
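The idea above is simple enough to sketch. The following is a hypothetical toy model (the role table, `Grant`, and `exception_report` names are invented for illustration, not the commenter's actual ERP code): any grant outside a user's role carries an `exception` flag, so a quarterly report can list exactly who holds out-of-role access.

```python
from dataclasses import dataclass, field

# Hypothetical role table: which permissions each role is entitled to.
ROLE_PERMS = {"clerk": {"read_orders"}, "manager": {"read_orders", "refund"}}

@dataclass
class Grant:
    permission: str
    exception: bool = False  # True when granted outside the user's role

@dataclass
class User:
    name: str
    role: str
    grants: list = field(default_factory=list)

def grant(user: User, permission: str) -> None:
    """Grant a permission; flag it if it falls outside the user's role."""
    in_role = permission in ROLE_PERMS.get(user.role, set())
    user.grants.append(Grant(permission, exception=not in_role))

def exception_report(users) -> list:
    """Everyone holding a permission outside their job role, for HR review."""
    return [(u.name, g.permission) for u in users for g in u.grants if g.exception]

alice = User("alice", "clerk")
grant(alice, "read_orders")  # in role, no flag
grant(alice, "refund")       # outside the clerk role, flagged
print(exception_report([alice]))  # prints: [('alice', 'refund')]
```

The design point is that the exception is recorded at grant time, when the context is known, rather than reconstructed later by someone untangling raw permission lists.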


Yes, that's what I was saying, though it could be technical management. Whoever configured it that way may have been put in a position to fail.

> I've been told to give super admin rights to VIPs that had no business with such things many times over the years even though it overrode all

One can establish the power to say 'no', at least in many places. The systems are your work domain (depending on your job title); you can make clear that outsiders are not permitted inside.

An essential is to establish credibility: 'Leave it to me - stay off my turf - and it will get done.' When they say the VIP needs something, find out what they really need and deliver that with amazing promptness and reliability (i.e., no saying later, 'oh, I didn't think of that!'). Everyone will be happy and impressed.

It also gives the impression that what you say is serious - you really mean that these security issues are serious. If you go along with the request, you convey that it's just talk.

Easily said in a comment, much harder to do of course. But remember that if you follow their instructions, and stuff goes wrong, it's your fault - your name is on the systems; the failures and successes are your reputation; and also you were supposed to advise them ...


> An essential is to establish credibility: 'Leave it to me - stay off my turf - and it will get done.' When they say the VIP needs something, find out what they really need and deliver that with amazing promptness and reliability (i.e., no saying later, 'oh, I didn't think of that!'). Everyone will be happy and impressed.

I do agree that this is necessary, but I would add that for more effective policy, there need to be (semi-)public demonstrations of those with higher titles actively refusing excessive access rights beyond their scope, so that everyone can see the commitment to the security practices. We started this in a previous org, and it was as simple as statements like the following: when someone offers additional access rights to a person who really doesn't need them, that person declines with the reasons:

- It's a security risk to give out too many permissions

- [The person] shouldn't have that access anyway; it should always be gated

This made it a lot easier to talk VIPs/whatever out of their temporary obsession with higher privileges, as there was an established social norm that the right thing to do is not to seek more access rights than are absolutely needed. Along with very strong enforcement of change management on access rights, which required very publicly visible documentation for such changes, it got a lot easier to make the social cost of wanting unneeded access rights too expensive for the VIPs/whatever: none of them wanted to be the person in the change management report who got flagged for requesting access for frivolous reasons. For a short time we even leveraged the results of phishing test emails against such VIPs/whatever as a demonstration that they really did need restricted rights, though this was forced to stop as it was "too embarrassing" for some of the VIPs/whatevers who had a horrible track record with phishing emails.

It's very simple to introduce, but enforcement just takes some time and a few potentially awkward conversations at first to prepare for the first "real fight" over access rights.


I got exhausted just reading this. Unless I'm getting paid big $$$$ to be the Chief Security Officer, it's not my problem. I just document and move on. I'm not even in InfoSec. I always tag in InfoSec if there is such a department. Sometimes they stop the foolish behavior, sometimes they don't. It's their job.

Perhaps on HN there are lots of people getting that FAANG salary, but I'm not one of them. I'm not going to sacrifice my sanity and health for standard industry pay. If they pay average, that is what they get. I recommend more people do the same; it's part of why everyone is going insane nowadays, popping pills, in therapy: because they are fearfully trying to do the job of 20 people.

>prepare for the first "real fight"

> talk VIPs/whatever out of their temporary obsession

F that to h3ll and back. Not my responsibility unless I'm getting paid. The world would be getting better, not worse, if more people actually fought for their sanity rather than doing work for free. Paid, or having a seat at the table, meaning actually having authority equal to the bigwigs making ham-fisted decisions that they don't understand. Too many people are in charge who should not be. Let them burn.

> forced to stop as it was "too embarrassing"

Exactly my point.


These are great details. I'll only add that the first part is related to the executive buy-in that's necessary for anything.


How do you suggest they make config errors “impossible”?


Don't make it a configuration option; restrict the option to only very specific conditions; require multiple approvals and verifications; limit the duration of the configuration (e.g., it disables after 7 days unless reset); audit it; etc.
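The "disables after 7 days unless reset" idea can be sketched in a few lines. This is a hypothetical illustration (the `TimedGrant` class and the `full_access_as_app` string are just stand-ins, not any real Azure/Entra API): the grant records its own expiry, every check re-validates it, and renewal has to be an explicit act.

```python
from datetime import datetime, timedelta, timezone

class TimedGrant:
    """A privileged setting that expires by default instead of persisting forever."""

    def __init__(self, name: str, ttl_days: int = 7):
        self.name = name
        self.expires = datetime.now(timezone.utc) + timedelta(days=ttl_days)

    def is_active(self, now=None) -> bool:
        # Every use of the privilege re-checks expiry; nothing is cached.
        now = now or datetime.now(timezone.utc)
        return now < self.expires

    def renew(self, ttl_days: int = 7) -> None:
        # Renewal is an explicit, auditable act, never automatic.
        self.expires = datetime.now(timezone.utc) + timedelta(days=ttl_days)

g = TimedGrant("full_access_as_app")           # hypothetical role name from the article
print(g.is_active())                           # prints: True
week_later = datetime.now(timezone.utc) + timedelta(days=8)
print(g.is_active(now=week_later))             # prints: False (forgotten grant self-disables)
```

The key property is that forgetting about the grant is the safe failure mode: doing nothing removes the privilege, rather than leaving it live for years.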


This is what is commonly known in the cybersecurity industry as a "whoopsie"


I like how we have all these cool security certifications that can quantifiably protect companies and industries from calculable risk, and yet somehow the well-reasoned and thoughtful best practices of a $36 book on Amazon go totally ignored...

It's almost like security is some sort of ribbon campaign.


Security is a process and not a product.

Anyone selling you security as a product is scamming you.


Even if I have security certifications, 1,000 other employees don't have them.

Regular people don't even care about security; they just do their jobs.

There are also so many servers/applications/settings that you don't have enough security-aware employees to review everything.

If you look long enough, any company at some point will have something open that should not be. That is what hacker groups do: they constantly look for an opening. They find these because a company, as it operates, has to set up new servers and new configurations all the time; it is not set once and done.


What book?


CISSP, CSSLP, ISSAP, ISSEP... heck, even a dog-eared copy of the Security+ would have helped...


Does this book tell you how to check your engine oil, or how to ensure your fleet of 100,000 rental cars across 1,000 worldwide locations, with continually revolving staff and cars constantly being sold and purchased, all have their engine oil checked? Probably the former. This ain't a textbook issue; it's a complex-organisation issue. Not an excuse, but it's unfair to say it's a $40-book issue.


Microsoft has a long track record of incompetence, though, coupled with products that are sheer garbage. I guess there is indeed no book for that.


They really don't, on either point.



This is why I hate when I go to a new gig and someone just assigns me a bunch of rights, "because it's easier". No, don't do that. Not only are you exposing your company to breaches, you're giving me responsibilities I don't want. I could make a mistake and screw something important up, or something could get hacked and people might think it was me, because I have privileges to do it.


TBF, given the number of rights that need to be managed for each employee across all their accounts and services, it feels like the natural progression of things.

Basically, we're experiencing the Windows XP security-popup equivalent, but at a service level, where at every step of a task you're asked to authenticate to something else, and the back and forth to get the proper credentials with the proper rights can take days.

Support teams saying "fuck it" and throwing half the accounts-and-permissions bucket at any newcomer is just human.


It is probably a good idea to separate production and other environments.

Most dev work is done in non-prod environments, so you can throw a lot of dev/test permissions at them.

Support should be able to support through app permissions only.

Devops/oncall will need automated escalation requests for emergencies. A single “XP popup” that gets audited.

The elephant in the room is CI. I have no answer; I think CI needs a rethink. CI has a lot of power if hacked.


Yeah, RBAC and ACLs kind of suck at enterprise levels. Organizing roles and privileges hierarchically helps to somewhat manage the complexity, but hierarchies have their own issues.


Missing in this post: how do the authors define "production" if a "non-production" account has admin rights to the prod domain?


Yes, I think this is key. This blog post and the quotes in the article are calling this a "gaffe", but in an organization the size and complexity of Microsoft, I'd say that mistaken permissions assignments are inevitable. So I don't think it's really helpful to focus on the "someone in a 220k person company made a mistake at some point" angle.

However, at most companies there is usually a hard, thick line between production and test systems. Granting production access to a test account should be basically impossible, so how that happened is what the investigation should focus on.


On top of that, when one (person/company) sets up a test account, they should (at creation/activation) set the deletion/deactivation date. So even if the admin is hit by a bus, that damn account will go off on DDMMYYYY and not "live long enough to become the villain".

Also, TERRIBLE/HORRIBLE user access management and review. How do they let a test account be live for "months"? Don't they have alerts for all test accounts in the PROD environment to pester them daily??

(I keep ranting about the need for a strong and capable internal IT audit... this is an alert that your IT audit SHOULD have set up for their Continuous Audit processes.)


Well said. A test account having admin privileges over a non-prod system is not the real problem. Poor isolation between prod and non-prod is.


Might be better to say that most companies think they have that kind of isolation, but pentesting, red teaming, and incidents later prove they don't. I have even seen companies routing prod traffic to test systems; it's not uncommon.

Test pretty much always leads to prod.


This is almost certainly another key trust misconfiguration, just like the last few major breaches at Microsoft. Last time it was leaving expired keys in place along with not enforcing expiry checks in the client. This time it's configuring the same key in both prod and test.

It's grade school stuff that they keep getting wrong.
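Both checks the comment describes are trivial to enforce at the client. The following is a hypothetical sketch (the `KEYS` table, key IDs, and `key_is_trusted` function are invented for illustration, not Microsoft's actual key infrastructure): a signing key carries both an environment tag and an expiry, and trust requires passing both checks.

```python
from datetime import datetime, timezone

# Hypothetical key store: each key is scoped to one environment and expires.
KEYS = {
    "key-1": {"env": "prod", "expires": datetime(2023, 1, 1, tzinfo=timezone.utc)},
    "key-2": {"env": "test", "expires": datetime(2030, 1, 1, tzinfo=timezone.utc)},
    "key-3": {"env": "prod", "expires": datetime(2030, 1, 1, tzinfo=timezone.utc)},
}

def key_is_trusted(key_id: str, env: str, now=None) -> bool:
    """Trust a signing key only if it is scoped to this environment and unexpired."""
    now = now or datetime.now(timezone.utc)
    meta = KEYS.get(key_id)
    if meta is None:
        return False
    if meta["env"] != env:      # the same key must never span prod and test
        return False
    if now >= meta["expires"]:  # enforce expiry in the client, not just at issuance
        return False
    return True

print(key_is_trusted("key-1", "prod"))  # prints: False (expired key left in place)
print(key_is_trusted("key-2", "prod"))  # prints: False (test key presented to prod)
print(key_is_trusted("key-3", "prod"))  # prints: True
```

The first rejection corresponds to the earlier breach (an expired key the client still accepted); the second to this one (shared key material crossing the prod/test boundary).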


I've got one better: I worked at a law firm where managers and partners were given admin access to everything. The default password after a reset was "passme", because the real password was too long to remember. They were supposed to reset their password after logging into the server. Hackers got a few of their accounts and started messing with things and stealing data. Even some test accounts had admin status. I'm glad I don't work there anymore; I was a programmer analyst and only had admin access to my own PC, to make Visual Basic 6.0 work.


This pattern is the rule rather than the exception across MS ecosystems, but to see Microsoft itself do it is particularly egg-on-face.

MS security has put significant effort into tooling and best-practice documentation to try to prevent these types of major screw-ups.


Yeah, wondering how the initial test account got compromised. Probably no MFA and password spray via OAuth ROPC flow, then lateral movement.

M365 is quite bad at enforcing MFA, it's pay to play.


Yeah, this shit happens all over the place. I've worked for/with numerous people whose method of testing almost always involves giving maximum latitude for, idk, shotgun debugging? The bigger problem is that people just forget: they've made five test accounts with admin privileges, and it isn't known until (hopefully) someone audits user privileges across the entire company.


I once worked for a company that stored all of its passwords for production servers and databases in a text file in the code repository, all because the chief architect didn't want to have to remember any passwords. Pointing out to the CTO how stupid that was got me "we trust our employees" and "we passed our security audit". My face still hurts from the facepalm.


I hate DB passwords too. They are super fast to crack; they are just annoying and not a security feature. I don't use them, i.e. passwords like "c00lz500". I use passwords like "". We use firewalls and internal networks instead.


KGB captured an NSA account?

Access to everything means it was not a planned disclosure.

Also, why does MS not design its systems to simply disallow one person from having such privileges? They are attacked by state actors and have moles inside...


Rather curious now to know how many global admins Microsoft has.


Why is this considered a "gaffe"? It may be the act of a mole who works at Microsoft as an admin.


I sometimes wonder what percentage of people from certain countries are posing as regular MS employees, but are actually working on behalf of their country's security service.

Is it 0%? Or more than 0? How much more?


It is definitely a thing that happens: https://www.nbcnews.com/tech/security/former-twitter-employe...

Hard to say how often / what fraction of occurrences are never caught, or caught but never publicized.


I have a friend who doesn't work for the CIA (wink wink); he takes normal everyday jobs. But he never worked for the CIA (wink).


Now Russia is probably extorting Microsoft with all the dirt they uncovered. I hope at least they will make the information public after they're done.


Why didn't Microsoft Copilot catch this?



