- GitHub alerted Okta about the access; Okta was not able to detect it themselves (https://docs.github.com/en/organizations/keeping-your-organi...)
- It only says "access to code repositories" (nothing about the level of that access; it could just as well mean write access, the ability to trigger Actions, etc.)
- Not relying on the confidentiality of source code is great, but malicious CD workflow actions would still be a risk if attackers had that level of access.
- No information about the entry point for compromise.
I doubt their 'commitment to transparency'.
For example, I trialed a major security vendor's enterprise product. They required that their app be granted Admin on the GitHub org. All they needed to do was create issues, open PRs, and read source code for analysis. There are scopes for exactly that.
I was eventually on a call with a principal engineer at this company, who kept saying they needed these permissions, and I kept showing him the API docs that showed that wasn't so. Eventually he said, "well, we won't _use_ all those permissions, so just give them to us anyway, because it's easier this way." Sure, I'll give you the ability to change all my code, add/remove users, drop repos, etc., and trust that some day, when you're hacked, someone will not use those over-granted permissions maliciously?
Security is hard. Be careful what permissions you give your 3rd party GitHub integrations.
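For the record, GitHub Apps can request exactly that narrow set. A sketch of what the vendor's footprint should have looked like, using GitHub's documented App permission names (`contents`, `issues`, `pull_requests`); the dict here is illustrative, not any vendor's real manifest:

```python
# Hypothetical sketch: the narrow GitHub App permission set that covers
# "create issues, open PRs, read source code" -- permission names follow
# GitHub's documented App permission vocabulary.
needed_permissions = {
    "contents": "read",        # read source code for analysis
    "issues": "write",         # create issues
    "pull_requests": "write",  # open pull requests
}

# Notably absent: "administration", "members", or any other org-admin scope.
print(sorted(needed_permissions))
```

Anything the integration asks for beyond a set like this deserves the pushback described above.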
Devs have been doing that since the dawn of computing. Oh, your app needs to be able to write to a protected folder on Windows? Don't document which folder; just force the app to run as Admin.
Early Android apps asked for all the permissions, all the time, because of lazy devs.
Security is hard, and it gets in the way of what the devs want to do, so they just find ways to bypass it.
> just give them to us anyway, because it's easier this way
Wow. The state of security in our profession is still sad if even major security vendors don't adhere to basics like the principle of least privilege.
Turns out, if you're logged into the central (on-prem) server, it has the ability to run commands as root/superuser on any of the connected clients (generally servers themselves).
The commands run this way are _not logged_ and don't show up in any system audit logging.
After we pointed this out as a security problem in itself, they released a new version that _apparently_ had this functionality removed (was in the release notes).
But digging into the new release, they'd just moved the functionality into different binaries and hoped no one would notice. :(
The mind boggles at what some of these places will try.
You can't even set up popular software like Tailscale with a GitHub login without it requiring access to your organization's private repositories.
It's like mobile phone permissions in the old days where your calculator needs access to your contacts and location.
I thought technology companies learned this lesson a decade ago, apparently not.
Given how coarse-grained GitHub's OAuth permissions are, I'd be concerned at the least. I've repeatedly had to avoid using something because, while I wanted to grant read access to one particular repo, I'd have had to allow write access to all private repos. GitHub's suggested solution is to grant the broad access to a separate user that itself has limited access, but then they shout at you for creating more users if you're not a paying member yourself (even if you're in a paying organisation).
The new tokens are still in beta, so there are some other limitations: for example, GitHub Packages does not support them yet, so you cannot use them with npm/yarn to fetch your private packages hosted on GitHub.
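Concretely, an `.npmrc` pointing at GitHub Packages still has to carry a classic PAT today (the org scope and env var here are placeholders):

```
# .npmrc -- GitHub Packages registry for a scoped org (placeholder names)
@your-org:registry=https://npm.pkg.github.com
# Must currently be a classic PAT with read:packages;
# fine-grained tokens are not yet accepted here.
//npm.pkg.github.com/:_authToken=${GITHUB_TOKEN}
```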
on my long-term wishlist: a standard way to aspirationally reduce your permissions footprint
like if I'm a twitter plugin, and I need one operation for which there is only a broad permission, I request the broad permission ('read and write everything'), but include an optional manifest making it clear that I only need the one action ('add followers')
this 1) enables a linter ecosystem to verify my footprint is narrower than it seems, and 2) over time lets the platform offer narrower, tailored perms based on observed use-cases
permissions matter bc users can audit them more easily than code
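The linter in (1) could be a trivial diff between the requested grant and the declared manifest, under an ordering of permission levels. A sketch, with an invented schema and scope names (no platform's real format):

```python
# Hypothetical linter sketch: flag where a plugin's requested OAuth grant
# exceeds its declared "aspirational" manifest. Schema is invented.
LEVELS = {"none": 0, "read": 1, "write": 2, "admin": 3}

def over_grants(requested: dict, declared: dict) -> list:
    """Return (scope, requested_level, declared_level) triples where the
    broad grant exceeds what the manifest says the app actually uses."""
    flagged = []
    for scope, level in requested.items():
        needed = declared.get(scope, "none")
        if LEVELS[level] > LEVELS[needed]:
            flagged.append((scope, level, needed))
    return sorted(flagged)

# The Twitter-plugin example: broad grant requested, one action declared.
requested = {"followers": "write", "tweets": "write", "dms": "read"}
declared = {"followers": "write"}  # "I only need to add followers"
print(over_grants(requested, declared))  # flags tweets and dms as excess
```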
Many unanswered questions.
Although in full transparency, I still use GPG for my needs, since I better understand its workflow
All those "verified" buttons you see on a typical repo history tend to actually be for the GitHub.com signing key, which is shared by everyone. Your GPG signature is only used to convince GitHub to sign the final commit with its key.
It is possible to put your GPG signature on the merged commits, so that people can trust the commits came from you. That may be especially appropriate for security software. But you have to do the merges (or rebases as you prefer) outside GitHub for that, and push those merges directly to the main branch. That's what I do when I can, but it's not common practice. Many orgs require all merges to be done via GitHub, so end up with GitHub.com's shared signature on everything instead of their own.
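A minimal sketch of that local-merge workflow, driven from Python against a throwaway repo (branch names are placeholders; in real use you'd add `-S` with your configured GPG key and push the merge to `main` yourself, so GitHub never re-creates the commit under its shared key):

```python
# Sketch: doing the merge locally so *your* signature, not GitHub's shared
# web-flow key, can end up on the merge commit. Throwaway repo for demo;
# with a GPG key configured you would add "-S" to the merge command.
import pathlib, subprocess, tempfile

def git(*args, cwd):
    subprocess.run(["git", *args], cwd=cwd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

repo = tempfile.mkdtemp()
git("init", cwd=repo)
git("checkout", "-b", "main", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "Dev", cwd=repo)
git("config", "commit.gpgsign", "false", cwd=repo)  # demo only: no key here

f = pathlib.Path(repo, "f.txt")
f.write_text("base\n")
git("add", "f.txt", cwd=repo)
git("commit", "-m", "initial", cwd=repo)

git("checkout", "-b", "feature", cwd=repo)
f.write_text("base\nfeature\n")
git("commit", "-am", "feature work", cwd=repo)

git("checkout", "main", cwd=repo)
# Real workflow: git merge --no-ff -S feature   (then: git push origin main)
git("merge", "--no-ff", "-m", "merge feature", "feature", cwd=repo)

out = subprocess.run(["git", "log", "--merges", "--oneline"], cwd=repo,
                     capture_output=True, text=True).stdout
print(out.strip())
```

Pushing that merge directly to `main` is what keeps GitHub from substituting its own signature on the final commit.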
For a company of Okta's scale and importance, this should be part of SIEM.
There are always creds in private/company repos! Lots.
I disagree with the "always" in this statement. Sloppy, lazy private repos, sure. It is possible to have secrets completely absent from any and all repos, though I've only seen and been part of such an effort once; it takes a lot of work to make sure it happens, and I have little faith in most companies following through with that.
However, what's more common is secrets in CI variables. If their GitHub was breached, they should be more concerned with whether the attackers had access to GitHub Actions logs or secrets.
I've worked in places where all the code was open to all teams, with the exception of DevOps because they had too many hardcoded secrets and never had time to clean up their mess. I get it, they really were spread thin... but it should have never happened in the first place.
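Pattern-based secret scanners (gitleaks, truffleHog, GitHub's own secret scanning) exist for exactly this reason, and their core is just a handful of regexes run over the tree. A toy sketch with illustrative patterns (the sample string uses AWS's documented example key):

```python
import re

# Toy secret scanner: a few illustrative patterns of the kind real tools
# like gitleaks ship many dozens of. Not production-grade.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_classic_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text: str) -> list:
    """Return (pattern_name, matched_string) pairs found in text."""
    hits = []
    for name, pat in PATTERNS.items():
        for m in pat.finditer(text):
            hits.append((name, m.group(0)))
    return hits

sample = "aws_access_key_id = AKIAIOSFODNN7EXAMPLE\nother_setting = safe\n"
print(scan(sample))  # flags the AWS-style key, nothing else
```

Running something like this (or the real tools) over repos *and* CI variable dumps is cheap insurance compared to cleaning up after a breach.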
Whoever stole this code can sell it to criminals/hostile states/black hats around the world, so in Okta's place I would start scanning the code with all available static checkers and fix as much, as fast, as I could.
Okta has become quite popular among startups; I used it at my previous job and I use it at my current one. I was even considering investing in its stock. My only complaint is that it's somewhat slow; otherwise the UI is acceptable and the Chrome plugin is OK-ish.
Okta is a terrible, expensive, and potentially incompetent (this is not their first breach) hosted knockoff of the above.
Nowadays few companies besides Zoom seem able to replicate this experience, despite processing power having more than doubled (not that Zoom is perfect, but it's the least bad).
I can test the mic and video before joining. I can also change my mic and video source after joining.
You'd think that both are 100% basic features, but nope. Can't do either in Google Meet or Teams...
Had to literally quit the meeting to pick a new webcam; Teams wanted to use the shitty laptop one instead of my dedicated camera on top of the main monitor. It also wanted to use the webcam's microphone and the display's speakers instead of my actual speakers and proper microphone.
The ability to put camera feeds on one screen and screenshare on another.
If you're using a laptop's built-in webcam, and you have external monitors, that makes the "video meeting with a screenshare" experience noticeably better.
Their internal infrastructure was pwned by some 16-year-old chavs who broke into support software with unjustifiable "superuser" features accessible by any employee. And Okta never intended to reveal that to us until the chavs themselves made it known.
This is a company that "literally" has one job: to protect the security of user accounts and facilitate login. Their product is simple, and its development velocity is extremely slow. As a developer who has integrated Okta into multiple products, I've never gotten the feeling it's being run by a competent crew of software engineers, never mind security professionals.
It's entirely unsurprising to me that they would be compromised at this level. Not only are they a juicy target, but they've proven themselves generally incompetent at performing their basic job.
I'm starting to wonder if there's some kind of incentive given out, or whether it's one of those "let's support Okta 'cause everyone uses it" things. And then SaaS customers also say "let's use Okta 'cause all our SaaS products keep mentioning it"?
Curious to know if Okta themselves were using best practices such as only using a hardware token as 2FA (FIDO2) and disabling SMS 2FA and recovery.
Anyway, there are many legit reasons. Should it set off an alarm? Probably. Can you say so before you do it? For sure!
The previous comments were referring to the famous and really hilarious Mel Brooks movie "Spaceballs", this scene in particular:
I take your point, but I'm not aware of any hack of a major provider which resulted in exfiltration of decrypted customer secrets. Providers often enumerate how they prevent exactly this scenario, but you'd be correct that if your endpoint were compromised, it's probably game over. To be fair, in this scenario just typing in your password (not using a manager) would also be game over.
If you want other options, it is possible to self-host (e.g. Vaultwarden). Personally I've been using 1Password for a long time, and their "Families" offering is exceptional for me and has meaningfully improved my family's security, since the UX is easy enough that my loved ones don't find a unique password per site "a chore".
IMHO the network is no longer the correct security boundary, and I hope I never encounter a VPN in my career ever again
Okta has been breached previously, with actual access to end-customer data, and so was Equifax, which leaked the credit reports of ~140M people. Both companies are still alive and thriving.
2022: a hacker group leaked customer information stolen from a third-party customer support engineer.
2022: this one
So not the worst breaches, but also definitely not what you want happening to your identity provider.
If I were Okta that is the first thing I would be locking down ASAP.
Also, shouldn't this be called a security incident rather than an event?
> Okta shortly acknowledged that the hack being referred to had in fact occurred late January 2022
In addition, a secure system is a moving target and it is difficult to find the right balance.
Breaches like this are important for us to refocus. Thanks for your service, Okta!
Okta has not learned its lesson on security with this hack, especially given they already got breached back in January.
> Okta does not rely on the confidentiality of its source code as a means to secure its services.
Would it be more professional for them to host their code in a public repo?
They were trying to keep their source code secure, and they failed. Not a great look for a company that asks you to trust them with all of your user accounts.
In my experience, security is usually an afterthought in most tech companies. Just look at how many MongoDB instances in production were running on open ports with the default password, which was... no password.
Frankly, as an employee, security is not my problem.
Unless you're in a tiny or ass-backward company that doesn't even have a policy that covers such things.
The accountability for picking Okta and checking whether they are, let's say, GDPR-compliant is on you!
Maybe. GDPR has a notification-of-breach requirement. We shall see.
With regards to Facebook and Google, there is recent news on that front:
And yes, the news you mention is welcome, but I'm not holding my breath - they've been allowed to brazenly breach the regulation for 4 years, and one of the regulators even ended up being corrupt and collaborating with them behind the scenes.
In most small and medium-sized IT companies there are rarely any security measures against insider attacks, or against accidental leaks when someone has been too incompetent about security. So leaks of sensitive data happen far more often, since plenty of SWEs and DevOps engineers can easily get access to all the production databases.
As a result, deploying a single-line fix to some critical system in a major bank might easily take 3-6 months.
Do disclosure laws say anything about end users? Even if said users aren’t affected?
Also, it's very much possible that whoever "hacked" their GitHub account just stumbled onto some important API token accidentally left public by one of their employees.
And even if there was malicious intent, there are more ways to use it. E.g., Okta is a public company, and whoever has access to its confidential data can manipulate its stock.