IAM on GitHub needs so much <3. So broad, much ow.
For example, I trialed a major security vendor's enterprise product. They required that their app be granted Admin on the GitHub org. All they actually needed to do was create issues, open PRs, and read source code for analysis. There are scopes for exactly that.
I was eventually on a call with a principal engineer at this company, who kept saying they needed these permissions, and I kept showing him the API docs that showed that wasn't so. Eventually he said, "well, we won't _use_ all those permissions, so just give them to us anyway, because it's easier this way." Sure, I'll give you the ability to change all my code, add/remove users, drop repos... etc., and trust that some day, when you're hacked, nobody will use those over-granted permissions maliciously?
Security is hard. Be careful what permissions you give your 3rd party GitHub integrations.
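To make that concrete: here's a sketch of what a least-privilege permission block for an app like that could look like in a GitHub App manifest. The scope names come from GitHub's App permission model; the mapping to the vendor's use case is my reading of it, not their actual config.

```yaml
# Sketch: the three scopes that cover "create issues, open PRs,
# and read source code for analysis" - nothing more.
default_permissions:
  contents: read        # read source for analysis
  issues: write         # create issues
  pull_requests: write  # open PRs
# Notably absent: administration, members, or any org-level scope.
```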
>>"well, we won't _use_ all those permissions, so just give them to us anyway, because it's easier this way."
Devs have been doing that since the dawn of computing. Ohh, your app needs to be able to write to a protected folder on Windows? Don't document which folder; just force the app to run as Admin.
Early Android apps asked for all the permissions, all the time, because of lazy devs.
Security is hard, and it gets in the way of what devs want to do, so they just find ways to bypass it.
> I trialed a major security vendor's enterprise product
> just give them to us anyway, because it's easier this way
Wow. The state of security in our profession is still sad if even major security vendor(s) don't adhere to basics like the principle of least privilege.
Heh. Reminds me of one of Symantec's "Enterprise" products.
Turned out, if you're logged into the central (on-prem) server, it has the ability to run commands as root/superuser on any of the connected clients (generally servers themselves).
The commands run this way are _not logged_ and don't show up in any system audit logging.
After we pointed this out as a security problem in itself, they released a new version that _apparently_ had this functionality removed (was in the release notes).
But digging into the new release, they'd just moved the functionality into different binaries and hoped no-one would notice. :(
The mind boggles at what some of these places will try.
My experience with security vendors is that a lot of them sell checkbox-compliance solutions that help you look compliant on paper but are, in reality, extremely insecure malware.
You need to choose from a drop-down that the token owner is the organization, not you. Then you can create a token for a repo in the org. Your org must opt in to the beta.
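If you want to sanity-check the result, here's a quick sketch (ORG/REPO and the token value are placeholders): a fine-grained token should only be able to see what it was explicitly granted.

```bash
# Succeeds for the one repo the token was granted:
curl -s -H "Authorization: Bearer github_pat_XXXX" \
  https://api.github.com/repos/ORG/REPO

# Returns 404 for any other private repo in the org - fine-grained
# tokens cannot even see resources outside their grant.
curl -s -H "Authorization: Bearer github_pat_XXXX" \
  https://api.github.com/repos/ORG/OTHER-REPO
```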
Definitely - also it's pretty easy to lose track of everything. I started a tool to audit GitHub apps and misc permissions for an org; it's pretty basic at the moment, but hopefully more checks will be added - 3rd-party integrations and apps are high on the list: https://github.com/crashappsec/github-analyzer. Any issues or feature requests are welcome, and hopefully I'll expand it soon!
Ugh. Doesn't raise my trust in their competence at protecting admin access credentials to GitHub. The same mindset leads to "We use just one shared ssh cert, because it is easier. And our VPN solution is a 2nd factor in any case".
> - It only says "access to code repositories" (it does not say anything about the level of that access, it might as well mean write access, capability to trigger actions etc.)
Given the exceptionally coarse granularity of GitHub permissions, with OAuth at least I'd be concerned. I've repeatedly had to avoid using something because, while I wanted to grant permission to, say, read one particular repo, I'd have to allow write access to all private repos. GitHub's solution is to grant that broad access to a separate user account that itself has limited access, although then they shout at you for creating more users if you're not a paying member yourself (but may be in an organisation).
Unfortunately this only seems to be available for repos you own yourself and not if an org owns the repo, making it useless in a company context until that is expanded. Great for personal stuff tho.
They work, but the organization needs to approve the token (and its scope). As an org admin, I prefer it this way because I can audit what access developers give out to what repositories.
The new tokens are still in Beta, so there are some other limitations: for example, GitHub Packages do not support them yet, so you cannot use them in NPM/yarn to get your private packages hosted on GitHub.
Doesn't this rather defeat the point? If using a PAT is simpler than using the org token then I'll just use a PAT that has access to every single repository in the org.
GitHub Org admins have the option to block all regular PATs, and only allow fine-grained tokens that they pre-approve. This block is “off” by default, but I expect the best practice soon will be to enforce this rule.
Oh, aside from the pre-approval, that would be really nice. I need stuff scoped to the org that is not necessarily connected to my personal account (and all its repositories).
overbroad roles + permissions is such a problem on every platform. github is surprisingly bad given their technical sophistication overall
on my long-term wishlist: a standard way to aspirationally reduce your permissions footprint
like if I'm a twitter plugin, and I need one operation for which there is only a broad permission, I request the broad permission ('read and write everything'), but include an optional manifest making it clear that I only need the one action ('add followers')
this 1) enables a linter ecosystem to verify my footprint is narrower than it seems, and 2) over time lets the platform offer narrower perms tailored to observed use-cases
permissions matter bc users can audit them more easily than code
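nothing like this exists today as far as I know, but the manifest could be as small as this (every field name here is hypothetical):

```json
{
  "granted_scope": "read_write_all",
  "aspirational_manifest": {
    "operations_actually_used": ["followers.add"],
    "note": "no narrower scope exists yet for this operation"
  }
}
```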
Since GitHub alerted Okta, I'm assuming they use the regular, hosted github.com. I'm kinda shocked a security company doesn't have a private GitHub Enterprise server behind a firewall.
Hosting something yourself does not make it magically more secure. Even if you hire a small team of really smart people, they'd have to work pretty hard to do as good of a job as the many 100s working on security at GitHub...
A good reason to give engineers PGP keys and turn on the "required code signing" feature on your org. Alas, security and productivity are perpetually at odds.
Just a friendly reminder that both GH and GL now support using SSH keys for signing commits, and 1Password (and KeePassXC, FWIW) will safely store those SSH creds off-disk.
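A minimal sketch of that setup, assuming git >= 2.34 (the 1Password piece uses git's gpg.ssh.program hook; the path shown is the macOS default, so adjust for your platform):

```bash
# Sign commits with an SSH key instead of GPG
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true

# Optional: let 1Password hold the key and perform the signing
git config --global gpg.ssh.program \
  "/Applications/1Password.app/Contents/MacOS/op-ssh-sign"
```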
Note that your GPG signature is not carried over: when anyone uses the GitHub UI to merge your PR, GitHub creates the resulting commit itself and signs it with GitHub.com's own GPG key.
All those "verified" buttons you see on a typical repo history tend to actually be for the GitHub.com signing key, which is shared by everyone. Your GPG signature is only used to convince GitHub to sign the final commit with its key.
It is possible to put your GPG signature on the merged commits, so that people can trust the commits came from you. That may be especially appropriate for security software. But you have to do the merges (or rebases as you prefer) outside GitHub for that, and push those merges directly to the main branch. That's what I do when I can, but it's not common practice. Many orgs require all merges to be done via GitHub, so end up with GitHub.com's shared signature on everything instead of their own.
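Concretely, the out-of-GitHub flow I mean is just something like this (assuming a branch named feature and direct push rights to main):

```bash
# Create the merge commit locally and sign it yourself, so main
# carries your signature instead of GitHub's shared web-flow key.
git fetch origin
git checkout main
git pull --ff-only origin main
git merge --no-ff -S feature   # -S signs the merge commit
git push origin main
```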
Just wondering - would this meaningfully impact productivity beyond causing engineers to have to learn how to sign a commit (which would presumably take less than an hour, once)?
Actually generating a key and signing commits is pretty easy. I think the harder part would be ensuring all devs safely store the keys, rotate them regularly, etc.
> There are always creds in private/company repos! Lots.
I disagree with the "always" in this statement. In sloppy, lazy private repos, sure. It is possible to keep credentials completely absent from any and all repos, though I've only seen and been part of such an effort once; it takes a lot of work to make sure it happens, and I have little faith in most companies following through with that.
In my experience it's a pretty low bar to keep private credentials outside of source code. If a "security" company like Okta has secrets in their source code, that's embarrassing and unexpected. Any competent team of 2+ developers should be able to avoid this.
However, what's more common is secrets in CI variables. If their GitHub was breached, they should be more concerned with whether the attackers had access to GitHub Actions logs or secrets.
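For anyone who hasn't seen it, this is the pattern that makes Actions secrets such a juicy target - a sketch of a typical workflow step (names illustrative) where an org secret is injected into the job environment:

```yaml
# The secret is decrypted at run time and handed to whatever the
# workflow runs; anyone who can edit this file, or the script it
# calls, is inside the blast radius.
steps:
  - name: Deploy
    run: ./deploy.sh
    env:
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```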
In this day and age, I really do not understand why one of the first steps when spinning up a new repo (for this type of app) is not leveraging a tool like dotenv, plus a config system that reads things like db credentials from environment variables. Yes, it takes another hour to get that all going, but in the long run you'll thank yourself!
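For the unfamiliar, a minimal sketch of that pattern with python-dotenv (variable names are illustrative, and .env itself belongs in .gitignore):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

# Reads key=value pairs from an untracked .env file into the
# environment; in production the same variables come from the
# real environment instead.
load_dotenv()

DATABASE_URL = os.environ["DATABASE_URL"]  # KeyError if unset: fail fast
```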
I've worked in places where all the code was open to all teams, with the exception of DevOps because they had too many hardcoded secrets and never had time to clean up their mess. I get it, they really were spread thin... but it should have never happened in the first place.
I know it's probably sarcasm, but just in case: I don't think they should, otherwise they could find themselves in a situation where an outage prevents them from fixing the outage.
Similar to the Facebook outage earlier: apparently everything was managed through Facebook accounts, so employees could not get access to fix the issues.
Maybe services like Okta (at least the core part without the UI, just like macOS & Darwin) should be open source by default? It would be a good policy to forcibly open-source anything connected to encryption/authorization/authentication or PKI.
Whoever stole this code can sell it to criminals/evil countries/black hats around the world, so in Okta's place I would start scanning the code with all available static checkers and fix as much as I could, as fast as I could.
Okta has become quite popular among startups; I used it at my previous job and I use it at my current one. I was even considering investing in its stock. My only complaint is that it is somewhat slow; otherwise the UI is acceptable and the Chrome plugin is OKish.
They are to auth what Zoom is to webconfs: Just Bad But Good At Marketing. In the winner-takes-all environment of the web economy, they must be making pretty good money.
To be fair, Zoom is the only modern conferencing software that actually has a native client instead of Electron garbage.
Back in the day many services had beautiful native clients (MSN/Windows Live, Skype pre-Microsoft, etc.) that successfully delivered videoconferencing on single-core CPUs and sub-gigabyte RAM, but clearly it was worth sacrificing all that just to give JavaScript developers a career.
Nowadays few companies besides Zoom seem to be able to replicate this experience despite processing power being more than double (not that Zoom is perfect, but it's the least bad).
You can do that in the Teams client for macOS, at least. Teams is even smart enough to pick up new audio devices mid-meeting if, say, you connect a new pair of headphones.
Didn't work the last time I had to have a meeting with my kid's teacher on Teams.
Had to literally quit the meeting to pick a new webcam, Teams wanted to pick the shitty laptop one instead of my dedicated camera on top of the main monitor. It also wanted to use the webcam's microphone and display's speakers instead of my actual speakers and proper microphone.
Absolutely. Zoom is the first video conference software I have used where the first 15 minutes of the meeting is not spent on impromptu tech support to get everyone connected, seeing, and hearing everyone else. And I'm exaggerating only slightly.
They did that by breaking user privacy and security on an unprecedented level. That's why I said they are Just Bad - their practices are terrible. But they were good at selling (i.e. deflecting when challenged on their bad behaviour), and that worked. They are "better" than alternatives in the same way Oracle is "better" than other database vendors.
I haven't used lots of videoconference tools, but Zoom has a killer feature I haven't seen elsewhere:
The ability to put camera feeds on one screen and screenshare on another.
If you're using a laptop's built-in webcam, and you have external monitors, that makes the "video meeting with a screenshare" experience noticeably better.
I don't know how I feel about this. On the one hand, Okta's position in the market makes them a high-value target to hack. Between the Lapsus$ hack in April and this, Okta has had a rough year. At the same time, I feel like they should know better, and I hated the way they handled the Lapsus$ breach, trying to hand-wave the potential damage away.
It's pretty clear how to feel about this: unsurprised.
Their internal infrastructure was pwned by some 16 year old chavs who broke into support software with unjustifiable "superuser" features accessible by any employee. And Okta never intended to reveal that to us until the chavs themselves made it known.
This is a company that "literally" has one job - to protect the security of user accounts and facilitate login. Their product is simple and its development velocity is extremely slow. As a developer who has integrated Okta into multiple products, I've never gotten the feeling it's being run by a competent crew of software engineers, nevermind security professionals.
It's entirely unsurprising to me that they would be compromised at this level. Not only are they a juicy target, but they've proven themselves generally incompetent at performing their basic job.
The only reason we picked Okta is because a lot of our downstream vendors had Okta integration.
I’m starting to wonder if there’s some kind of incentive given out or whether it’s one of those “let’s support okta cuz everyone uses it.” And then SaaS customers also say “let’s use okta cuz all our SaaS products keep mentioning it”?
You're assuming that the breach was done through the UI and not, for example, via an OAuth token or SSH key that was stolen from a developer's machine and used by the attacker to download the source code.
Another comment mentions GitHub themselves detecting the breach - in this case it's unlikely to be done via a compromised developer's laptop as the access would otherwise look normal and wouldn't trigger GH's security alerts.
Meh... I do this every 6-8 months as a principal engineer. I've had many legit use cases: understanding our overall dependency tree, validating code coverage assumptions, seeing which projects still built, testing out prototype profiler reports, inspecting the code to see how hard adding X pattern would be, quantifying code change patterns over the pandemic, seeing which uses of the AWS SDK or internal clients were instrumented with metrics, seeing what pct would build under make/go build/bazel/etc.
Anyway, many legit reasons. Should it set off an alarm? Probably. Can you say so before you do it? For sure!
Definitely not true. This can be the best password under certain conditions.
You should not give your "good" passwords to any shady site out there. You have no idea how passwords are stored on all these platforms one registers with.
If you can live with the fact that an account may be hacked, then go for a super-easy password if you want.
Setting aside the fact that it was a joke, on the serious side: a person shouldn't have any set of "good" passwords at all. They should just have secure, auto-generated passwords and some way of retrieving them from wherever they are stored when needed.
Or, just have a unique password for every site stored in a password manager, and then they can all be “good” passwords, with no big concerns about how they are stored!
"Our customers' passwords remain safely encrypted due to LastPass’s Zero Knowledge architecture."
I take your point but I'm not aware of any hack of a major provider which resulted in exfiltration of decrypted customer secrets. Providers often enumerate how they prevent exactly this scenario [1][2], but you'd be correct that if your endpoint were compromised, it's probably game over. To be fair in this scenario just typing in your password (not using a manager) would also be game over.
If you want other options, it is possible to self-host (i.e. Vaultwarden). Personally I've been using 1Password for a long time, and their "Families" offering [3] is exceptional for me and has meaningfully improved my family security since the UX is easy enough my loved ones don't find a unique password per site "a chore".
IMO, it's a bit weird they would even host their source code on open-facing GitHub. Of course I don't know the whole story, but it seems like a company dealing with security would benefit from hosting their own VCS, perhaps GitHub Enterprise, behind a VPN..?
Nothing will change as long as adequately dissuasive penalties aren't being levied against companies with terrible security practices. It's business as usual.
Okta has been breached previously, with actual access to end-customer data, and so has Equifax, which leaked the credit reports of ~140M people. Both companies are still alive and thriving.
The bigger concern to me is not Okta but rather the companies it acquired. One of them specifically integrates directly with Okta to manage agents running on production servers as root. This would be an incredibly valuable target, much more so than access to some web front-ends. In some cases the damage done through a web front end can be rolled back assuming an audit trail. Damage done to large numbers of servers in many companies as root would be quite challenging and time consuming to remediate and that is assuming tested backups are in place that root can not tamper with. I have failed to convince companies to have backups that root can not tamper with.
If I were Okta that is the first thing I would be locking down ASAP.
Everyone uses those words differently and no two groups agree. "Event" is often used since it’s the most generic. Also, many companies have requirements around their use of certain words, and this might not meet their lawyers’ definition of "incident".
Did they decide that that was their strategy before or after this breach occurred?
They were trying to keep their source code secure, and they failed. Not a great look for a company that asks you to trust them with all of your user accounts.
That's not a bad point. GitHub is convenient, yes, but if you manage security for companies that are collectively worth billions, you want to minimize your attack surface.
In my experience, security is usually an afterthought in most tech companies. Just look at how many MongoDB instances in production were running on open ports with the default password, which was... no password.
It always amazes me how many companies host their source code in “the cloud”, visible to external companies, accessible from the public internet, only a single bug/security vulnerability/human error away from being leaked. The companies I’ve worked for so far host their source code on their own servers, which are physically under their control and aren’t accessible from the public internet. Add on top of that the fact that, functionally, GitHub/GitLab don’t hold a candle to the code review functionality of Gerrit or the project management of Redmine/Jira, and the only explanation left is that those companies really, really don’t want to hire 1-2 sysadmins to manage their infrastructure.
Companies host data a lot more valuable than source code in the cloud. Developers like to think we’re the center of the business, but access to a company’s Salesforce or NetSuite instance is a lot more economically valuable than access to a company’s GitHub org.
Companies with hundreds of sysadmins and billions in revenue are moving to the cloud... It has nothing to do with the cost of a handful of additional engineers.
Solving security issues and maintaining secure systems isn't your problem, but adhering to company policy most certainly is, and the policy almost certainly forbids you from sharing passwords (especially via Slack.)
Unless you're in a tiny or ass-backward company that doesn't even have a policy that covers such things.
If Okta admits to being breached or having something stolen from them, I wonder how many months ago that was. Guess we'll have to wait for a Cloudflare post to get reliable info.
They are hacked, but you will never know or be personally affected. This is why banks are strictly regulated and have insurance for everything, and agreements with whoever manages their IT and security.
Oh yes you absolutely can. C-level execs routinely get D&O insurance that covers everything financial... the only thing a C-level exec can't get out of is jail time, but it's rare that courts hand out jail time.
GDPR only applies when personal data is involved. Banks usually have all client data well isolated from everyone, including most of the IT staff, so credit card data being stolen doesn't always mean that personal data is affected.
My point is that the “obligatory” link is getting annoying when the total amount of fines over the last 4 years, across all companies, is still less than the yearly profit of the single biggest offender.
And yes, the news you mention is welcome, but I’m not holding my breath - they’ve been allowed to brazenly breach the regulation for 4 years, and one of the regulators even ended up being corrupt and collaborating with them behind the scenes.
They get hacked and robbed regularly. But they’ll never disclose it, to prevent a run on the bank.
There are just a lot of “service interruptions“, like online banking not working for a few hours
Don't most western countries have disclosure laws now? This doesn't sound right, not in Australia at least, it would be completely illegal to cover such a thing up and not notify the government cyber agencies.
If you google "us bank data breach" you'll find plenty of examples of this happening and being disclosed. However, banking regulations are strict, and banks' own systems are designed to be resistant to insider attacks: everything is usually built to keep the different kinds of data within the bank's systems isolated and audited.
In most small and middle-sized IT companies there are rarely any security measures against insider attacks, or against accidental leaks when someone has been too incompetent about security. So leaks of sensitive data happen far more often, since plenty of SWEs and DevOps can easily get access to all the production databases.
As a result, deploying a single-line code fix to some critical system in a major bank might easily take 3-6 months.
In the EU you have to inform the proper officials and everyone who was impacted. Failure to do so may result in a fine of up to 10 million euro or 2% of global revenue. There are some reports - not that many. Honestly, I would say that banks are not a very attractive target - it's not easy to extract a reasonable amount of money that way (it's better to directly scam the end user), and security is usually at least a bit above average, so for personal data it's easier to hack some e-commerce store.
Even if the hacker was able to start an Okta competitor with their source code, how many companies would actually buy it? Who would maintain it? Or am I missing something here - what would they use the source code for?
Exactly. Why would I start a competitor when I could easily sell zero-days to dictatorial regimes and make millions or possibly billions? Or utilize them myself?
First of all, why do you think that everything has to be done for profit? A lot of security research, including the blackhat kind, is done just for fun, and maybe this was some kind of white-hat attempt gone wrong that was never reported to them.
Also, it's very much possible that whoever "hacked" their GitHub account just stumbled onto some important API token accidentally left public by one of their employees.
And even if there is malicious intent, there are more ways to use it. E.g. Okta is a public company, and whoever has access to its confidential data can manipulate its stock.
- GitHub alerted Okta about the access; they were not able to detect this themselves (https://docs.github.com/en/organizations/keeping-your-organi...)
- It only says "access to code repositories" (it does not say anything about the level of that access, it might as well mean write access, capability to trigger actions etc.)
- Not relying on the confidentiality of source code is great, but malicious CD workflow actions would still be a risk if attackers had that level of access.
- No information about the entry point for compromise.
I doubt their 'commitment to transparency'.