Okta says its GitHub account hacked, source code stolen (bleepingcomputer.com)
448 points by nazgulsenpai on Dec 21, 2022 | 169 comments



Red flags for me:

- GitHub alerted Okta about the access; they were not able to detect it themselves (https://docs.github.com/en/organizations/keeping-your-organi...)

- It only says "access to code repositories" (it does not say anything about the level of that access; it could just as well mean write access, the capability to trigger actions, etc.)

- Not relying on the confidentiality of source code is great, but malicious CD workflow actions would still be a risk if attackers had that level of access.

- No information about the entry point for compromise.

I doubt their 'commitment to transparency'.


IAM on GitHub needs so much <3. So broad, much ow.

For example, I trialed a major security vendor's enterprise product. They required their app be granted Admin on the GitHub org. All they needed to do was create issues and PRs and read source code for analysis. There are scopes for that.

I was eventually on a call with a principal engineer at this company, who kept saying they needed these permissions, and I kept showing him the API docs that showed that wasn't so. Eventually he said, "well, we won't _use_ all those permissions, so just give them to us anyway, because it's easier this way." Sure, I'll give you the ability to change all my code, add/remove users, drop repos... etc., and trust that some day, when you're hacked, someone will not use those over-granted permissions maliciously?

Security is hard. Be careful what permissions you give your 3rd party GitHub integrations.


>>"well, we won't _use_ all those permissions, so just give them to us anyway, because it's easier this way."

Devs have been doing that since the dawn of computing. Oh, your app needs to be able to write to a protected folder on Windows? Don't document which folder, just force the app to run as Admin.

Early Android apps asked for all the permissions, all the time, because of lazy devs.

Security is hard, and it gets in the way of what the devs want to do, so they just find ways to bypass it.


Early Android had pretty coarse permissions IIRC. It wasn't quite "root or nothing" but somewhat closer to that than it is today.


"because of lazy devs" :thinking:


Definitely not because of the pesky product managers and sales teams who want a new feature to sell yesterday to boost their EOY bonus...


Ask them to sign a document accepting all liability in those situations. I think the conversation will quickly change.


"Sorry, we don't accept redlines or riders for accounts that are less than $750k ACV."


> I trialed major security vendor's enterprise product

> just give them to us anyway, because it's easier this way

Wow. The state of security is still sad in our profession, if even major security vendors don't adhere to basics like the principle of least privilege.


Heh. Reminds me of one of Symantec's "Enterprise" products.

Turned out, if you're logged into the central (on-prem) server, it has the ability to run commands as root/superuser on any of the connected clients (generally servers themselves).

The commands run this way are _not logged_ and don't show up in any system audit logging.

After we pointed this out as a security problem in itself, they released a new version that _apparently_ had this functionality removed (was in the release notes).

But digging into the new release, they'd just moved the functionality into different binaries and hoped no-one would notice. :(

The mind boggles at what some of these places will try.


"Required functionality..." They're just not telling you who the requirements come from.


It's not just security vendors, it's everyone.

You can't even set up popular software like Tailscale with a github login without it requiring access to your organization's private repositories.

It's like mobile phone permissions in the old days where your calculator needs access to your contacts and location.

I thought technology companies learned this lesson a decade ago, apparently not.


My experience with security vendors is that a lot of them sell checkbox-compliance solutions that on paper help you be compliant but in reality are extremely insecure malware.


It was only recently that PATs got the ability to be scoped per repo, and even that's still in beta.


But frustratingly, fine-grained scoping doesn't work for repos that are part of an Organisation! Like, what!


You need to choose from a drop-down that the token owner is the organization, not you. Then you can create a token for a repo of the org. Your org must opt in to the beta.


Definitely - also it's pretty easy to lose track of everything. I started a tool to audit GitHub apps and misc permissions for an org. It's pretty basic at the moment, but hopefully more checks will be added - 3rd party integrations and apps are up there: https://github.com/crashappsec/github-analyzer. Any issues or feature requests are welcome; hopefully I'll expand it soon!


I had exactly this same experience with Vercel, and we backed out of using them for our major open source repo as a result.


Ugh. Doesn't inspire trust in their competence at protecting admin access credentials to GitHub. The same mindset leads to "We use just one shared SSH cert, because it is easier. And our VPN solution is a 2nd factor in any case".


Name and shame.


> - It only says "access to code repositories" (it does not say anything about the level of that access; it could just as well mean write access, the capability to trigger actions, etc.)

Given the exceptionally coarse granularity of GitHub permissions with OAuth, at least, I'd be concerned. I've repeatedly had to avoid using something because, while I wanted to grant permission to, say, read one particular repo, I'd have had to allow write access to all private repos. GitHub's solution is to grant the broad access to a separate user that itself has limited access, although then they shout at you for creating more users if you're not a paying member yourself (but may be in an organisation).

https://docs.github.com/en/developers/apps/building-oauth-ap...


GitHub recently introduced fine-grained tokens that can be scoped to a single repository, which might work for your use case: https://github.blog/2022-10-18-introducing-fine-grained-pers...
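For instance (a minimal sketch; the org/repo names and the GITHUB_TOKEN env var are placeholders), such a token works like any other token against the REST API, but only for the repo it was granted:

    # Sketch: read a single repo with a fine-grained PAT (names are placeholders).
    import os
    import requests

    token = os.environ["GITHUB_TOKEN"]  # fine-grained PAT, Contents: read-only
    resp = requests.get(
        "https://api.github.com/repos/example-org/example-repo",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    resp.raise_for_status()
    print(resp.json()["full_name"])
    # The same call against any repo outside the token's grant fails with a 404.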


Unfortunately this only seems to be available for repos you own yourself and not if an org owns the repo, making it useless in a company context until that is expanded. Great for personal stuff tho.


They work, but the organization needs to approve the token (and its scope). As an org admin, I prefer it this way because I can audit what access developers give out to what repositories.

The new tokens are still in Beta, so there are some other limitations: for example, GitHub Packages do not support them yet, so you cannot use them in NPM/yarn to get your private packages hosted on GitHub.
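So for now the Packages side still needs a classic PAT. Roughly (an untested sketch; the scope name and env var are placeholders), the .npmrc looks like:

    # Sketch .npmrc: GitHub Packages still requires a classic PAT (read:packages).
    @example-org:registry=https://npm.pkg.github.com
    //npm.pkg.github.com/:_authToken=${NPM_TOKEN}

With NPM_TOKEN set to a classic token in the environment this works; per the above, substituting a fine-grained token here does not authenticate yet.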


Doesn't this rather defeat the point? If using a PAT is simpler than using the org token then I'll just use a PAT that has access to every single repository in the org.


GitHub Org admins have the option to block all regular PATs, and only allow fine-grained tokens that they pre-approve. This block is “off” by default, but I expect the best practice soon will be to enforce this rule.


Oh, aside from the pre-approval, that would be really nice. I need stuff scoped to the org that is not necessarily connected to my personal account (and all its repositories).


Fine-grained access tokens are available on org-owned repos too, but the org has to opt in (for some reason).


I'm excited for FGPATs, but they're still in beta and still have a lot of shortcomings at this time.


GitLab has had this feature for years.


overbroad roles + permissions is such a problem on every platform. github surprisingly bad given their technical sophistication overall

on my long-term wishlist: a standard way to aspirationally reduce your permissions footprint

like if I'm a twitter plugin, and I need one operation for which there is only a broad permission, I request the broad permission ('read and write everything'), but include an optional manifest making it clear that I only need the one action ('add followers')

this 1) enables a linter ecosystem to verify my footprint is narrower than it seems, and 2) over time lets the platform offer narrower perms tailored to observed use-cases
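concretely, such a manifest might look something like this (entirely hypothetical - no platform supports it today, and every field name is invented for illustration):

    {
      "requested_scopes": ["read_write_all"],
      "declared_usage": ["followers.add"]
    }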

permissions matter bc users can audit them more easily than code


Since GitHub alerted Okta, I'm assuming they use the regular, hosted github.com. I'm kinda shocked a security company doesn't have a private GitHub Enterprise server behind a firewall.


if they had, it'd be worse - they'd probably never know they were hacked...


Hosting something yourself does not make it magically more secure. Even if you hire a small team of really smart people, they'd have to work pretty hard to do as good of a job as the many 100s working on security at GitHub...


One would assume that their code is never accessible over the public Internet. It is pretty much over for Okta now.


So you expect "critical" companies to self-host everything?


Add to it that they reviewed "all recent commits to Okta software repositories." Due diligence or indicative of the threat actor having write access?

Many unanswered questions.


A good reason to give engineers PGP keys and turn on the "required code signing" feature on your org. Alas, security and productivity are perpetually at odds.


Just a friendly reminder that both GH and GL now support using SSH keys for signing commits, and 1Password (and KeePassXC, FWIW) will safely store those SSH creds off-disk:

https://docs.github.com/en/authentication/managing-commit-si...

https://docs.gitlab.com/ee/user/project/repository/ssh_signe...

https://developer.1password.com/docs/ssh/agent/

https://keepassxc.org/docs/KeePassXC_UserGuide.html#_ssh_age...

Although in full transparency, I still use GPG for my needs, since I better understand its workflow.
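For anyone who wants to try the SSH route, the git-side setup is just a few config switches (a sketch assuming git 2.34+ and an existing Ed25519 key; the key path is an example):

    # Sketch: sign commits with an SSH key instead of GPG (git 2.34+).
    git config --global gpg.format ssh
    git config --global user.signingkey ~/.ssh/id_ed25519.pub
    git config --global commit.gpgsign true   # sign every commit by default

The same public key then gets uploaded to GitHub/GitLab as a signing key (distinct from an authentication key) so the platform can mark your commits as verified.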


Note that your GPG signature is discarded, and GitHub signs the merged commit with GitHub.com's own GPG key, when anyone uses the GitHub UI to merge your PR.

All those "verified" buttons you see on a typical repo history tend to actually be for the GitHub.com signing key, which is shared by everyone. Your GPG signature is only used to convince GitHub to sign the final commit with its key.

It is possible to put your GPG signature on the merged commits, so that people can trust the commits came from you. That may be especially appropriate for security software. But you have to do the merges (or rebases as you prefer) outside GitHub for that, and push those merges directly to the main branch. That's what I do when I can, but it's not common practice. Many orgs require all merges to be done via GitHub, so end up with GitHub.com's shared signature on everything instead of their own.


Just wondering - would this meaningfully impact productivity beyond causing engineers to have to learn how to sign a commit (which would presumably take less than an hour, once)?


Actually generating a key and signing commits is pretty easy. I think the harder part would be ensuring all devs safely store the keys, rotate them regularly, etc.


So, genuine question: how do I detect access to my GitHub repository before GitHub alerts me about something nefarious?


> You can stream audit and Git events data from GitHub to an external data management system.

For a company of Okta's scale and importance, this should be part of SIEM.

- https://docs.github.com/en/enterprise-cloud@latest/admin/mon...
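Even without full streaming, the audit log is queryable over REST, so basic detection can start with polling (a sketch; Enterprise Cloud orgs only, and the org name and token env var are placeholders):

    # Sketch: poll recent audit-log events for an org (GitHub Enterprise Cloud).
    import os
    import requests

    token = os.environ["GITHUB_TOKEN"]
    resp = requests.get(
        "https://api.github.com/orgs/example-org/audit-log",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        params={"include": "all", "per_page": 100},  # web + git events
    )
    resp.raise_for_status()
    for event in resp.json():
        print(event.get("@timestamp"), event.get("action"), event.get("actor"))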


GitHub's audit logging is quite lacking; it doesn't include API requests.


Another red flag: they don't mention what they will do about the clear-text credentials in the source code that was stolen.

There are always creds in private/company repos! Lots.


> There are always creds in private/company repos! Lots.

I disagree with the "always" in this statement. Sloppy, lazy private repos, sure. It is possible to have them completely absent in any and all repos, though I've only seen and been part of such an effort once; it takes a lot of work to make sure it happens, and I have little faith in most companies following through with that.


In my experience it's a pretty low bar to keep private credentials outside of source code. If a "security" company like Okta has secrets in their source code, that's embarrassing and unexpected. Any competent team of 2+ developers should be able to avoid this.

However, what's more common is secrets in CI variables. If their GitHub was breached, they should be more concerned with whether the attackers had access to GitHub Actions logs or secrets.
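To make that concrete, here is what a typical workflow step looks like (a generic sketch; the secret name and script are placeholders). Any job that consumes a secret hands it to whatever code runs in that job, so write access to the repo effectively means read access to its CI secrets:

    # Sketch: a typical GitHub Actions step that exposes a secret to job code.
    # Anyone who can modify deploy.sh (or this workflow) can exfiltrate the secret.
    - name: Deploy
      env:
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      run: ./deploy.sh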


In this day and age, I really do not understand why one of the first steps when spinning up a new repo (for this type of app) is not leveraging a tool like dotenv, and then a config system that uses environment variables for things like DB credentials, etc. Yes, it takes another hour to get that all going, but in the long run you'll thank yourself!
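The whole pattern is only a few lines; a sketch using python-dotenv (the variable name is an example):

    # Sketch: keep credentials in an untracked .env file, never in the repo.
    # .env (gitignored) contains a line like:  DB_PASSWORD=s3cret
    import os
    from dotenv import load_dotenv  # pip install python-dotenv

    load_dotenv()  # loads .env from the working directory into os.environ
    db_password = os.environ["DB_PASSWORD"]  # read at runtime, never hardcoded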

I've worked in places where all the code was open to all teams, with the exception of DevOps because they had too many hardcoded secrets and never had time to clean up their mess. I get it, they really were spread thin... but it should have never happened in the first place.


Okta employee here. I can assure you that there are no clear-text credentials in our source code.


If only they had managed their GitHub logins with Okta


I know it's probably sarcasm, but just in case: I don't think they should, otherwise they could find themselves in a situation where an outage prevents them from fixing the outage.


Similar to the Facebook outage earlier. Apparently everything is managed through Facebook accounts so employees could not get access to fix the issues.


I wonder what they use for SSO. Auth0 for Okta and vice versa?


Maybe services like Okta (at least the core part without the UI, just like macOS and Darwin) should be open source by default? Forcibly open-sourcing anything connected to encryption, authorization, authentication, or PKI would be a good policy.

Whoever stole this code can sell it to criminals/evil countries/black hats around the world, so in Okta's place I would start scanning the code with all available static checkers and fix as much, and as fast, as I could.

Okta has become quite popular among startups; I used it at my previous job and I use it at my current one. I was even considering investing in its stock. My only complaint is that it is somewhat slow; otherwise the UI is acceptable and the Chrome plugin is OKish.


There's nothing in Okta that isn't already open-source in projects like Keycloak or Ory.

Okta is a terrible, expensive, and potentially incompetent (this is not their first breach) hosted knockoff of the above.


They are to auth what Zoom is to webconfs: Just Bad But Good At Marketing. In the winner-takes-all environment of the web economy, they must be making pretty good money.


To be fair, Zoom is the only modern conferencing software that actually has a native client instead of Electron garbage.

Back in the day many services had beautiful native clients (MSN/Windows Live, Skype pre-Microsoft, etc) that successfully delivered videoconferencing on single-core CPUs and sub-gigabyte RAM capacities but clearly it was worth sacrificing all that just to give Javascript developers a career.

Nowadays few companies besides Zoom seem to be able to replicate this experience despite processing power being more than double (not that Zoom is perfect, but it's the least bad).


Zoom's native client uses Qt WebEngine, so not Electron garbage but still Blink garbage.


Zoom legitimately has better video quality than everyone else. I wish they didn't but they do.


Zoom client also Just Works.

I can test the mic and video before joining. I can also change my mic and video source after joining.

You'd think that both are 100% basic features, but nope. Can't do either in Google Meet or Teams...


You can do that in Teams client for macOS at least. Teams is even smart enough to pick up new audio devices mid-meeting, if, say, you connect a new pair of headphones.


Didn't work the last time I had to have a meeting with my kid's teacher on Teams.

Had to literally quit the meeting to pick a new webcam, Teams wanted to pick the shitty laptop one instead of my dedicated camera on top of the main monitor. It also wanted to use the webcam's microphone and display's speakers instead of my actual speakers and proper microphone.


Teams definitely supports changing video and audio devices after joining a meeting, I use that feature almost daily. It’s not new.


Are you using the macOS client or a browser client?


Meet does both of those things.


I disagree about Zoom. They managed to get big by being better than Skype and Teams. And that was who they needed to compete with.


Absolutely. Zoom is the first video conference software I have used where the first 15 minutes of the meeting is not spent on impromptu tech support to get everyone connected, seeing, and hearing everyone else. And I'm exaggerating only slightly.


They did that by breaking user privacy and security on an unprecedented level. That's why I said they are Just Bad - their practices are terrible. But they were good at selling (i.e. deflecting when challenged on their bad behaviour), and that worked. They are "better" than alternatives in the same way Oracle is "better" than other database vendors.


I haven't used lots of videoconference tools, but Zoom has a killer feature I haven't seen elsewhere:

The ability to put camera feeds on one screen and screenshare on another.

If you're using a laptop's built-in webcam, and you have external monitors, that makes the "video meeting with a screenshare" experience noticeably better.


I don't know how I feel about this. On the one hand, Okta's position in the market makes them a high-value target to hack. Between the Lapsus$ hack in April and this, Okta has had a rough year. At the same time, I feel like they should know better, and I hated the way they handled the Lapsus$ breach, trying to hand-wave the potential damage away.


It's pretty clear how to feel about this: unsurprised.

Their internal infrastructure was pwned by some 16 year old chavs who broke into support software with unjustifiable "superuser" features accessible by any employee. And Okta never intended to reveal that to us until the chavs themselves made it known.

This is a company that "literally" has one job - to protect the security of user accounts and facilitate login. Their product is simple and its development velocity is extremely slow. As a developer who has integrated Okta into multiple products, I've never gotten the feeling it's being run by a competent crew of software engineers, nevermind security professionals.

It's entirely unsurprising to me that they would be compromised at this level. Not only are they a juicy target, but they've proven themselves generally incompetent at performing their basic job.


The only reason we picked Okta is that a lot of our downstream vendors had Okta integration.

I’m starting to wonder if there’s some kind of incentive given out, or whether it’s one of those “let’s support Okta cuz everyone uses it” things. And then SaaS customers also say “let’s use Okta cuz all our SaaS products keep mentioning it”?


Wonder if we are ever going to know how this happened.

Curious to know if Okta themselves were using best practices such as only using a hardware token as 2FA (FIDO2) and disabling SMS 2FA and recovery.


You're assuming that the breach was done through the UI and not, for example, via an OAuth token or SSH key stolen from a developer's machine and used by the attacker to download the source code.


Another comment mentions GitHub themselves detecting the breach - in that case it's unlikely to have been done via a compromised developer's laptop, as the access would otherwise look normal and wouldn't trigger GH's security alerts.


Depends, if someone suddenly starts pulling down every single repository in the org, that should ring some bells.


Meh... I do this every 6-8 months as a principal engineer. I've had many legit use cases: understanding our overall dependency tree, validating code coverage assumptions, seeing which projects still built, testing out prototype profiler reports, inspecting the code to see how hard adding x pattern would be, quantifying code change patterns over the pandemic, seeing which uses of the AWS SDK or internal clients were instrumented with metrics, seeing what pct would build under make/go build/bazel/etc.

Anyway, many legit reasons. Should it set off an alarm? Probably. Can you give a heads-up before you do it? For sure!


Depends on the number of repositories I would assume. There are orgs with thousands of them.


Last I downloaded it was around 3600 of them.


I think that they have alerts for when an access token is found in the wild, for instance. So it is quite possible.


Why would it always look normal? Different IP, different usage patterns could trigger the alert.


It would look abnormal if it was accessed from a dubious geolocation compared to normal access, which are things github can track and detect.


[flagged]


That's the stupidest password I've ever heard in my life! That's the kind of thing an idiot would have on his luggage!


Remind me to change the combination on my luggage...


And yet it consistently ranks high among passwords still in use. This is a clickbaity article, but there are better ones out there: https://www.cnbc.com/2022/11/23/most-common-passwords-of-202...


Just FYI for those who missed it:

The previous comments were referring to the famous and really hilarious Mel Brooks movie "Spaceballs", this scene in particular:

https://youtu.be/a6iW-8xPw3k


I see that your Schwartz is as big as mine.


Now let's see how well you handle it.


Definitely not true. This can be the best password under certain conditions. You should not give your "good" passwords to any shady site out there. You have no idea how passwords are stored on all these platforms one registers with. If you can live with the fact that an account may be hacked, then go for a super easy password if you want.



Setting aside the fact that it was a joke: on the serious side, a person shouldn't have any set of "good" passwords. They should just have auto-generated secure passwords and some way of retrieving them from wherever they are stored when needed.


Or, just have a unique password for every site stored in a password manager, and then they can all be “good” passwords, with no big concerns about how they are stored!


Other than your eggs-in-one-basket password apps being hacked and exfiltrating them all, which would never happen.

https://www.macrumors.com/2022/12/02/lastpass-hacked-second-...


"Our customers' passwords remain safely encrypted due to LastPass’s Zero Knowledge architecture."

I take your point but I'm not aware of any hack of a major provider which resulted in exfiltration of decrypted customer secrets. Providers often enumerate how they prevent exactly this scenario [1][2], but you'd be correct that if your endpoint were compromised, it's probably game over. To be fair in this scenario just typing in your password (not using a manager) would also be game over.

If you want other options, it is possible to self-host (e.g. Vaultwarden). Personally I've been using 1Password for a long time, and their "Families" offering [3] is exceptional for me and has meaningfully improved my family's security, since the UX is easy enough that my loved ones don't find a unique password per site "a chore".

[1] https://support.1password.com/1password-security/ [2] https://1passwordstatic.com/files/security/1password-white-p... [3] https://1password.com/families/


All it takes is a supply chain attack and it's all gone


If you are following the policy of unique passwords per login then there is no need for "saving up the good passwords".


Yeah, everyone in the world is tech savvy enough to work with password managers. Reality looks different, trust me.


solarwinds123


IMO, it's a bit weird they would even host their source code on open-facing GitHub. Of course I don't know the whole story, but it seems like a company dealing with security would benefit from hosting their own VCS, perhaps GitHub Enterprise, behind a VPN..?


That's one of those weird questions: does Okta have more resources for knowing abnormal VCS behavior, or GitHub?

IMHO the network is no longer the correct security boundary, and I hope I never encounter a VPN in my career ever again


That would be contrary to Okta's zero-trust security model.


Free involuntary security audit


The results of the audit will be provided to 3rd parties, in the form of additional involuntary free audits.


Unplanned decentralized backup.


Write-only backups, from Okta's perspective


Not so fast, maybe the attackers will provide the backups for a small fee


Their public repos include .travis.yml files. Perhaps Okta is still feeling effects from the TravisCI breach?


The early-December timeline matches.


Centralized authentication systems. What could go wrong?


Nothing is going wrong as long as adequately dissuasive penalties aren't being levied against companies with terrible security practices. It's business as usual.

Okta has been breached previously, with actual access to end-customer data, and so did Equifax, who leaked the credit reports of ~140M people. Both companies are still alive and thriving.


Didn't Okta have another security breach recently?


2021: office network breach via a security camera

2022: hacker group leaked customer information stolen from a third-party customer support engineer

2022: this one

So not the worst breaches, but also definitely not what you want happening to your identity provider.

Edit: formatting


The bigger concern to me is not Okta but rather the companies it acquired. One of them specifically integrates directly with Okta to manage agents running on production servers as root. This would be an incredibly valuable target, much more so than access to some web front-ends. In some cases the damage done through a web front end can be rolled back assuming an audit trail. Damage done to large numbers of servers in many companies as root would be quite challenging and time consuming to remediate and that is assuming tested backups are in place that root can not tamper with. I have failed to convince companies to have backups that root can not tamper with.

If I were Okta that is the first thing I would be locking down ASAP.


Locking down what, exactly?


I'm not seeing details of how, though. Was it some leaked tokens? Employee credentials?

Also, shouldn't this be called a security incident rather than an event?

https://advisera.com/27001academy/blog/2018/12/03/iso-27001-...


Everyone uses those words differently and no two groups agree. "Event" is often used since it's the most generic. Also, many companies have requirements around their use of certain words, and this might not meet their lawyers' definition of "incident".


A company I previously worked at would call this a CRI: critical room for improvement.


A good reminder to use fun and absurd names for things. Life's too short.


A former boss used to use the term OFE - Opportunity For Excellence :-)



Hackers claimed to have broken in 9 months ago in a different incident.

https://news.ycombinator.com/item?id=30762520


According to this article that was confirmed by Okta:

> Okta shortly acknowledged that the hack being referred to had in fact occurred late January 2022


Critically, it was confirmed by Okta only after the hackers went public, despite Okta having knowledge of the breach before then.


In the end, somebody has to pay for all the additional investment in security, and many are not willing to do that.

In addition, a secure system is a moving target and it is difficult to find the right balance.

Breaches like this are important for us to refocus. Thanks for your service, Okta!


Clearly someone is paying for additional investments into security. GitHub, that is.


Maybe the hackers will be able to fix some of the bugs Okta could never figure out how to fix!


So unprofessional for a security company to be hosting their code on GitHub and connecting it with lots of silly integrations, access tokens, etc.

Okta has not learned its lesson about security, especially after getting breached back in January.


How so? It refutes that right there in the advisory:

> Okta does not rely on the confidentiality of its source code as a means to secure its services.

Would it be more professional for them to host their code in a public repo?


Did they decide that that was their strategy before or after this breach occurred?

They were trying to keep their source code secure, and they failed. Not a great look for a company that asks you to trust them with all of your user accounts.


Yes? What do they have to gain from the code not being public?


Competitive advantage?


It might be, yes. With the right protections on pull requests.


That's not a bad point. GitHub is convenient, yes, but if you manage security for companies that are collectively worth billions, you want to minimize your attack surface.

In my experience, security is usually an afterthought in most tech companies. Just look at how many MongoDB instances in production were running on open ports with the default password, which was... no password.


It always amazes me how many companies host their source code in “the cloud”, visible to external companies, accessible from the public internet, only a single bug/security vulnerability/human error away from being leaked. The companies I’ve worked for so far host their source code on their own servers, which are physically under their control and aren’t accessible from the public internet. Add on top of that the fact that, functionally, GitHub/GitLab don’t hold a candle to the code review functionality of Gerrit or the project management of Redmine/Jira, and the only explanation is that those companies really, really don’t want to hire 1-2 sysadmins to manage their infrastructure.


Companies host data a lot more valuable than source code in the cloud. Developers like to think we’re the center of the business, but access to a company’s Salesforce or NetSuite instance is a lot more economically valuable than access to a company’s GitHub org.


Those things are no less incredible to me than GitHub. (Also not used at the companies I’ve worked for so far.)


An alternative to using your own servers, which are not feasible for every company, would be isolated and encrypted GitLab/GitHub deployments, as described here: https://dev.to/flxflx/setting-up-a-confidential-gitlab-333h


Companies with hundreds of sysadmins and billions in revenue are moving to the cloud... It has nothing to do with the cost of a handful of additional engineers.


At my jobs, employees routinely share passwords in Slack or use simple passwords.

Frankly, as an employee, security is not my problem.


Solving security issues and maintaining secure systems isn't your problem, but adhering to company policy most certainly is, and the policy almost certainly forbids you from sharing passwords (especially via Slack.)

Unless you're in a tiny or ass-backward company that doesn't even have a policy that covers such things.


They have a policy, but there is nobody who cares to enforce it.


Maybe they should consider being GitHub's secret scanning partner.


That goes double because Okta's token scoping is a raging dumpster fire, which forces customers to use God Tokens or GTFO. I hate it with all my heart


If Okta admits to being breached or having something stolen from them, I wonder how many months ago that actually was. Guess we'll have to wait for a Cloudflare post to get reliable info.


Well, let's see how it goes for security by obfuscation; glhf


At what point are the banks hacked and we just don't know?


They are hacked, but you will never know or be personally affected. This is why banks are strictly regulated and have insurance for everything and agreements with whoever manage their IT and security.


You can't outsource accountability.


Oh yes you absolutely can. C-level execs routinely get D&O insurance that covers everything financial... the only thing a C-level exec can't get out of is jail time, but it's rare that courts hand that out.


That is responsibility. You outsource to, let's say, Okta; Okta takes out insurance. Payment goes down the chain.

The accountability for picking Okta and checking whether they are, let's say, within GDPR is on you!


> you will never know

maybe. GDPR has a notification of breach requirement. we shall see.


GDPR only applies when personal data is involved. Banks usually have all client data well isolated from everyone, including most IT staff, so credit card data being stolen doesn't always mean that personal data is affected.


GDPR enforcement is laughable. Come back when Facebook and Google are fined out of existence and then you’ll have a point.



My point is that the “obligatory” link is getting annoying when the total amount of fines over the last 4 years, tallied up across all companies, is still less than the yearly profit of the single biggest offender.

And yes, the news you mention are welcome but I’m not holding my breath - they’ve been allowed to brazenly breach the regulation for 4 years and one of the regulators even ended up being corrupt and collaborated with them behind the scenes.


They get hacked and robbed regularly. But they’ll never disclose it, to prevent a run on the bank. There are just a lot of “service interruptions”, like online banking not working for a few hours.


Don't most western countries have disclosure laws now? This doesn't sound right - not in Australia at least; it would be completely illegal to cover such a thing up and not notify the government cyber agencies.


If you google for "us bank data breach" you'll find plenty of examples of this happening and being disclosed. However, banking regulations are strict, and banks' own systems are designed to be resistant to insider attacks. So everything is usually designed to keep different kinds of data within the bank's own systems isolated and audited.

In most small and mid-sized IT companies there are rarely any security measures against insider attacks, or against accidental leaks when someone has been too incompetent about security. So leaks of sensitive data happen far more often, since plenty of SWEs and DevOps can easily get access to all the production databases.

As a result, deploying a single-line fix to the code of some critical system in a major bank might easily take 3-6 months.


Telling the government is not the same as telling you, the consumer.

Do disclosure laws say anything about end users? Even if said users aren’t affected?


GDPR (so the whole of the EU+UK) does, yes.


In the EU you have to inform the proper officials and everyone who was impacted. Failure to do so may result in a fine of up to 10 million euro or 2% of global revenue. There are some reports - not that many. Honestly, I would say that banks are not a very attractive target - it's not easy to extract a reasonable amount of money that way (better to directly scam the end user), and security is usually at least a bit above average, so for personal data it's easier to hack some e-commerce store.


Maybe some people have audited the fed!?


I don't think it's a surprise. Security is always a tradeoff between complexity and simplicity in software design.


When your product basically is security, that's not really a compromise you get to make...


Even if the hacker was able to start an Okta competitor with their source code, how many companies would actually buy it? Who would maintain it? Or am I missing something here, what would they use the source code for?


My first thought: it's a lot easier to find vulnerabilities if you have the source code


Exactly. Why would I start a competitor when I could easily sell zero days to dictatorial regimes and make millions or possible billions? Or utilize them myself?


First of all, why do you think everything has to be done for profit? A lot of security research, including blackhat research, is done just for fun, and maybe this was some kind of white-hat attempt gone wrong that was never reported to them as a result.

Also, it's very possible that whoever "hacked" their GitHub account just stumbled onto some important API token accidentally left in public by one of their employees.

And even if there is malicious intent, there are more ways to use it. E.g. Okta is a public company, and whoever has access to its confidential data can manipulate their stock.



