Hacker News
Hackers claim to have breached Okta systems (twitter.com/_mg_)
1158 points by obi1kenobi on March 22, 2022 | hide | past | favorite | 375 comments

For people wondering about how they get access to these systems, it feels like they use the weakest human link in the chain.

I checked their Telegram group, and I can see that they specifically recruit people with access to VPNs/internal support systems. This Okta breach seems to have happened through similar means. The group has made specific calls for access to gaming companies, hosting providers, telcos, call centers, and BPM providers. They offer payment.

It doesn't feel difficult to see how some overworked, underpaid support agent (or even a well-paid disgruntled one) might decide to go along with this. It takes just one well-placed agent handing over creds for this group to gain access to a huge attack surface instantly. This might sound like an overreaction, but corporations in the future might need to make least-privilege access and logging of all access a priority from day 1.

https://t.me/minsaudebr/162 (Link to the recruitment message)

The screenshots show the account of someone who has a LinkedIn profile identifying themselves as a Tier 2 Technical Services Engineer [1] working for an outsourced services provider in Costa Rica. They've been in the job for just over a year according to LinkedIn. This person states they provide troubleshooting assistance to Okta customers, and therefore may have needed elevated permissions over a number of Okta internal systems.

This raises a number of architectural questions for Okta:

- Can a support engineer for customers in one region access the systems/databases relied upon by customers in other regions?

- Do employees get notified when their account is logged into or used (e-mail/SMS/otherwise)?

- Do customers get notified when a support engineer accesses (with privileges) a system/database the customer relies upon?

[1] https://work180.com/en-US/for-women/job/215617/tier-2-techni...

Region is a red herring, except for government threats.

Otherwise, the goal is to give each agent access to an arbitrary subset of users, so it's harder to recruit an agent who can access the "target" victim customer.

It should be noted that this comes with significant (as in worth consideration, not necessarily huge) operational and logistical issues. Here are three:

* Balancing the assignment of users to appropriate agents

* Less flexibility for agents to be specialized (instead of an agent responding to all X related tickets everywhere in the world, an agent must respond to X, Y, and Z from these specific users)

* Handling user coverage when agents are busy, PTO, etc. The spirit of splitting agent access would mean agents would need to request access from an out-of-region authority in order to help with things last minute. An authority who will be expected to be responsive 24/7 without also being an attack vector

For the third point, couldn't you have some overlap between agents, so that each customer is served by N agents, each agent having a different set of M customers? It is a super expensive arrangement, though.

Yeah, that would work, but like you said it is expensive - or at least it is a cost that doesn't exist in the current model and would need specific attention. And we cannot assume customer requests/issues are uniformly distributed - it's not unlikely that 80% of issues come from 20% of customers. But if you give 80% of agents access to the same 20% of customers, you haven't really solved the original problem. The risk factor is certainly lower and I would prefer this setup, but who knows if that justifies the additional overhead and access control management.

Technically, the current model is a full M customers x N agents mapping: all customers served by all agents. Adding some zeros to that mapping should certainly be possible without sacrificing much from the current system. There's no way to know how many is reasonable.
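The "zeros in the mapping" idea can be sketched concretely. Below is a hedged illustration (all names are hypothetical, and the assignment policy is deliberately naive): each customer gets a small rotating subset of agents, so no single compromised agent account exposes every tenant.

```python
import itertools

def assign_agents(customers, agents, coverage=3):
    """Assign each customer to a small rotating subset of agents.
    `coverage` is the N discussed above: agents per customer.
    A real scheme would also weight by workload and specialization."""
    assignment = {}
    agent_cycle = itertools.cycle(agents)
    for customer in customers:
        assignment[customer] = [next(agent_cycle) for _ in range(coverage)]
    return assignment

customers = [f"cust{i}" for i in range(10)]
agents = [f"agent{j}" for j in range(5)]
matrix = assign_agents(customers, agents, coverage=2)

# No single agent should be able to reach every customer.
reach = {a: sum(a in members for members in matrix.values()) for a in agents}
assert all(count < len(customers) for count in reach.values())
```

Round-robin is just the simplest way to fill in the non-zero cells; the point is that the access matrix becomes sparse while every customer stays covered.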

All that said, there is a natural logic that arises from this 'analysis': limit access to customers who are inactive, or who rarely have issues. A common-sense addition would be to raise their default priority for when they do interact.

== Information on mitigating security controls in place that could have limited the severity of the incident ==

The screenshots show that LAPSUS$ had access, via the account of the outsourced Costa Rica-based Tier 2 TSE worker, to view details of and reset the password of a UK-based site reliability engineer at a well-known global Internet infrastructure company. This level of access wouldn't sound hugely problematic on its own if job postings for Okta Tier 2 TSE workers didn't indicate server/network administration responsibilities. There is some publicly published information that provides more context on what access a Tier 2 TSE worker may have to customer data:

- Okta's security whitepaper [1] states that multi-factor authentication is needed for administrative access to AWS instances (host operating systems) so this could have possibly prevented LAPSUS$ from using their access to do anything more sinister than perhaps causing customer organisation-wide outages by disabling/resetting all their user accounts. The response for control AC-17(06) at [3] indicates that remote access requires valid multi-factor authentication to occur so if LAPSUS$ could gain remote access, perhaps they had already gained the ability to generate valid multi-factor authentication tokens, or otherwise bypass multi-factor authentication for remote access.

- The response for controls IA-02(09), AC-06(07), PS-04(01) and SA-12 at [3] indicates that Okta employee access to customer data is deemed privileged access and that fewer than 30 Okta employees have this level of privileged access to this customer data. I doubt this is true, unless the SP 800-53r4 responses at [3] are only referring to the US government tenancies that are hosted separately from tenancies of global companies.

- The response for controls SI-04(12), SI-04(19) and SI-04(20) at [3] indicates that changes made to a customer tenancy (for example resetting passwords for customer users as shown in the LAPSUS$ screenshots) should have triggered events being sent to both Okta's Splunk SIEM system and also to a customer SIEM system if they have set one up. Theoretically this should quickly raise a red flag to the customer if LAPSUS$ were to use this access to bulk deny the customer access to their ICT systems. It is however not clear that privileged access to the host operating system for the AWS instance would cause an event to be sent to the customer's SIEM system.

== Level of segregation of customer tenancies and whether LAPSUS$ may have been using Okta access to attack customers ==

It's fairly straightforward to enumerate the Okta hosting arrangements from DNS records as listed below. The sample targeted company in the screenshots has their Okta tenancy hosted within AWS region us-west-2 (Okta OK7) and other screenshots showed that the compromised account had recently accessed privileged access tools for OK3, OK8 and OK12. Thus it would appear that a Costa Rican-based Tier 2 TSE worker would have access to customer information within perhaps any of the clusters (not geographically restricted).

  ok-crtr-tls1-nlb-36253386bec48ce6.elb.us-east-1.amazonaws.com - response when a tenancy is invalid/doesn't exist
  ok2-crtr-tls12-nlb-xxxxxxxxxxxxxxxx.elb.us-east-1.amazonaws.com - companies (global)
  ok3-crtr-tls12-nlb-xxxxxxxxxxxxxxxx.elb.us-east-1.amazonaws.com - companies (global)
  ok4-crtr-tls12-nlb-xxxxxxxxxxxxxxxx.elb.us-east-1.amazonaws.com - companies (global)
  ok5-crtr-tls12-fips-nlb-xxxxxxxxxxxxxxxx.elb.us-west-2.amazonaws.com - US government agencies
  ok6-crtr-tls12-nlb-xxxxxxxxxxxxxxxx.elb.us-east-2.amazonaws.com - companies (global)
  ok7-crtr-tls12-nlb-xxxxxxxxxxxxxxxx.elb.us-west-2.amazonaws.com - companies (global)
  ok8-crtr-tls12-nlb-xxxxxxxxxxxxxxxx.elb.ap-southeast-2.amazonaws.com - Australian government agencies and some other Australian companies/associations
  ok9-crtr-tls12-nlb-xxxxxxxxxxxxxxxx.elb.eu-west-1.amazonaws.com - unknown / no customers found
  ok10-crtr-tls12-fips-nlb-xxxxxxxxxxxxxxxx.elb.us-east-2.amazonaws.com - US government agencies
  ok11-crtr-tls12-nlb-xxxxxxxxxxxxxxxx.elb.us-east-2.amazonaws.com - unknown / no customers found
  ok12-crtr-tls12-nlb-xxxxxxxxxxxxxxxx.elb.us-west-2.amazonaws.com - unknown / no customers found
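For illustration, the cell identifier can be read straight off a resolved CNAME target (the DNS lookup itself, e.g. via `dig tenant.okta.com CNAME`, is omitted here). A small hypothetical parser, assuming the `okN-crtr-...` naming pattern shown above:

```python
import re

def okta_cluster(cname_target):
    """Extract the Okta cell identifier (e.g. 'OK7') from a CNAME
    target like 'ok7-crtr-tls12-nlb-....elb.us-west-2.amazonaws.com'.
    Returns None for the numberless catch-all 'ok-' target that is
    returned when a tenancy is invalid / doesn't exist."""
    m = re.match(r"ok(\d+)-", cname_target)
    return f"OK{m.group(1)}" if m else None

assert okta_cluster(
    "ok7-crtr-tls12-nlb-x.elb.us-west-2.amazonaws.com") == "OK7"
assert okta_cluster(
    "ok-crtr-tls1-nlb-36253386bec48ce6.elb.us-east-1.amazonaws.com") is None
```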
To attempt to answer a question in another comment, whether LAPSUS$ could have been compromising other companies via Okta:

  microsoft.okta.com - likely customer - ok3-crtr-tls12-nlb-dfef298ffc8f82ca.elb.us-east-1.amazonaws.com
  nvidia.okta.com - likely customer - ok2-crtr-tls12-nlb-acd62f33a2a6a463.elb.us-east-1.amazonaws.com
  vodafone.okta.com - confirmed customer [4] - ok4-crtr-tls12-nlb-29367a8e4bb80716.elb.us-east-1.amazonaws.com
  samsung.okta.com [5] - possibly not a customer - ok-crtr-tls1-nlb-36253386bec48ce6.elb.us-east-1.amazonaws.com
  ubisoft.okta.com - possibly not a customer - ok-crtr-tls1-nlb-36253386bec48ce6.elb.us-east-1.amazonaws.com
  lg.okta.com [5] - possibly not a customer - ok-crtr-tls1-nlb-36253386bec48ce6.elb.us-east-1.amazonaws.com
... so, probably not?

[1] https://www.okta.com/resources/whitepaper/okta-security-tech...

[2] https://trust.okta.com/security/

[3] https://www.okta.com/resources/whitepaper/using-okta-to-prot...

[4] https://miniorange.com/atlassian/atlassian-single-sign-on-ss...

[5] Note: also tried similar names known to be used by company divisions.

I wish this were posted on the Okta security incident page or something like it.

> Can a support engineer for customers in one region access the systems/databases relied upon by customers in other regions?

My (admittedly outdated) experience is that this is always the case, regardless of any assurance vendors might have provided to the customer. In the end, people have to get stuff done, and offshore bodies are just too cheap to pass on. Things might have changed a bit since GDPR but, in practice, I expect there will always be "channels" for people to reach out cross-region.

Also "timezones".

Just because a customer is based in Europe doesn't mean they don't need to get hold of support at 03:00 local time. Even small, growing startups now have an HQ in one part of the world and staff across the globe, due to the ease of movement many enjoy.

> Do employees get notified when their account is logged into or used (e-mail/SMS/otherwise)?

No. It's a tricky situation, too: even if one did receive such an email, most people won't look at work email until working hours. So even with such a notification, if you logged in on Friday at 20:00, you'd have over 50 hours to do your work.

I might also add that Okta DOES NOT support any form of physical 2FA (e.g.: yubikeys), only software-based ones.

> I might also add that Okta DOES NOT support any form of physical 2FA (e.g.: yubikeys), only software-based ones.

Not sure where you got this from, but we've been using yubikeys for a few months now. They even allow policies to only allow physical 2FA methods.

> I might also add that Okta DOES NOT support any form of physical 2FA (e.g.: yubikeys), only software-based ones.

You might be operating under outdated information. I absolutely use a YubiKey with my Okta account.

They, in fact, support many forms of 2FA; your org just doesn't have them enabled.

“Do they have access” is one thing, but what happens when they use the access? Is there any logging or proactive monitoring?

A single support account reading say 10x the normal number of accounts per day, with a different geographic distribution than the norm, should ring alarm bells. It’s not enough to say “well shucks they have access to the data, guess we’ll do nothing”
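As a sketch of the kind of alarm bell described above (the thresholds and numbers are made up, not Okta's actual monitoring): compare an agent's read volume today against their own historical baseline.

```python
from statistics import mean, stdev

def flags_anomaly(history, today, z_threshold=3.0):
    """Flag a support account whose read volume today is far outside
    its own historical baseline. `history` is a list of daily
    account-read counts for this agent on normal days."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu  # flat baseline: any increase is notable
    return (today - mu) / sigma > z_threshold

baseline = [40, 55, 48, 52, 45, 50, 47]  # typical daily read counts
assert not flags_anomaly(baseline, 60)   # within normal variation
assert flags_anomaly(baseline, 500)      # ~10x normal reads
```

A real system would also track the geographic distribution mentioned above (e.g. share of reads per customer region per agent) with the same baseline-vs-today comparison.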

I agree with you: put support agents in the panopticon. You could go as far as to give them access to each other's logs and enable a bounty system for reporting (verified) abuse anonymously.

I agree - this should be super important especially for a company like Okta

People outsource auth to a specialised company (Okta), and that company outsources tier-1/2 support to another specialised company (because they are a multi-tenant B2B SaaS co.). If this were not a SaaS product, but instead a well-tested, audited and certified software package bought and run in-house by FTEs (and audited periodically), this additional complexity could have been avoided and this breach would have been much harder.

As an industry I don't know if we have made rational choices along the way.

For most companies, hiring good-quality FTEs is complicated and very expensive. It makes sense, in many cases, to delegate some things to external vendors (SaaS companies). There is a line to be drawn somewhere, and it will vary between companies and industries - e.g. a tech company could probably handle auth in-house easily, while I wouldn't expect a law firm or the like to handle that (and I wouldn't want them to).

> For most companies, hiring good quality FTEs is complicated, and very expensive.

This is cope - managers repeat this narrative so they can blame some other factor rather than having to put in the work to fix the problem of hiring good FTEs. Unfortunately, this narrative has been repeated so often that it is perceived to be axiomatic, which is why it is repeated without any criticism.

No, it's not that hard to find good FTEs; you just haven't tried hard enough, or you're not paying enough, or your Glassdoor reviews are bad.

>or you're not paying enough

This is not a negligible barrier. Sure, you could argue that a business that isn't successful enough to hire that expensive FTE maybe shouldn't exist at all, but a lot more is possible by being able to outsource it. And there really isn't a good reason why it cannot be outsourced from an operations standpoint - it is entirely a security problem. It's not like auth is hard for a third party to do.

Yeah, it's not surprising at all that someone managed to build a business selling auth for less than hundreds of thousands of dollars a year.

Probably a lot cheaper than the fire pit they just walked into.

In the long run, yes. But people are often promoted mostly based on short term impacts. Especially visible ones.

e.g. nobody got a promotion because the change they pushed for 5 years ago prevented an attack today.

That's not to say people are just greedy and only chase what gets them a promotion, but replace "promotion" with "received positive feedback".

As an industry, we have abundance of choice. You're not practically nor theoretically locked to SaaS vendors - there are equally good self-hosted open source auth solutions.

A portion of our industry seems to be bad at risk calculation, but that's not inherent to the SaaS model - this app very well could've been made with strictly separated tenants.

Realistically though we are often locked into terrible software that no one wants because a SVP was wined and dined by the vendor.

Definitely not and it will hit us big time imho. We are one freak incident away from abandoning npm/pypi completely and completely re-thinking how libraries are being done/used.

If Okta can't hire competent staff and build a secure environment, how can you expect every other company in the world to do so?

> well-tested, audited and certified software package

Who is selling this?

To be fair to Okta, nothing is ever 100% secure.

that is such a lazy statement

JavaScript is the most popular language. We have not made rational choices at all.

I wonder if it would be possible to counter this with policy, something like:

> If you receive a credible bribe and report it immediately to X we will report to relevant authorities and provide a bonus equal to the value of the bribe (capped at Y).

Sounds like a recipe for a "Cobra Effect"[1] where suddenly, credible bribes grow by 10x!

[1] https://en.wikipedia.org/wiki/Perverse_incentive

There's always the fable about the boss who declares that being even a few minutes late to work will cost employees the whole hour's pay, causing them to be reliably an hour late, all the time.

Works both ways. If you keep me in the office for a few extra minutes after quitting time I'm claiming an hour of overtime.

In the Cobra scenario, breeding a Cobra has no "cost" beyond the breeding of a Cobra.

If you reported a fraudulent bribe, and it was passed on to the police, there is a risk that both the briber and you are arrested.

It's however trivial to manufacture "real" bribes - hell someone could build an automated spambot spamming bribes to everyone as an act of charity.

That would fail the "credible" test for reward.

So now you expect everyone to engage with their bribers to determine credibility?

In addition, how do you even determine credibility when even real attackers might renege and not deliver on their promise of a bribe?

This sounds like trying to get rid of rats by paying for dead rats, someone will start farming rats.

I see only trouble with this approach.

Just hire good people, treat them right, have sensible audit and monitoring procedures.

You used the “just” word there. Minimizing the difficulty of complex problems is a good way to discredit yourself.

How do you tell competent employees from incompetent ones? By the number of certificates? By the number years in industry? By school? By leetcode score? By salary asked?

It's very difficult.

By paying attention.

edit: By which I mean 1) audit your processes and 2) get to know your people.

Processes require good engineering to thrive; people require good management. When you confuse the two, you get problems.

I have read more than one story of rat-catching incentives ultimately leading to discovery of rat farming.

Another misaligned incentive is having a Bug Bounty program but offering ridiculously low compensation for highly critical findings that show complete compromise of your platform. That makes it clear you don't treat security as a high priority, and that the Bug Bounty was just there to tick a compliance checklist.

Some will take it as a sign they should poke further at your system, or shop around for others paying better.

> or shop around for others paying better.

That's kind of a weird moral take on things. If you don't like the bounties offered then simply don't invest time in the platform looking for things to exploit.

That was not a statement about my morals; it was about the morals of others...

Companies with insulting bug bounty programs deserve to get hacked.

The Bug Bounty Program effectively protects them from that.

You wouldn't know how insulting it is before going through a good deal of paperwork that legally tames your desires.

If you have access to valuable assets of a company, you must be handsomely paid to avoid stuff like bribes.

Industries outside of software have known this since time immemorial.

Probably the software way of solving this is with least-privilege access, automatic auditing and approvals.

I wonder why not even security companies are doing that correctly though.

For insider attacks auditing is really critical. At some level there will always have to be human beings with superuser access. But if every admin access and action is logged, audited and tied to a specific person then it becomes very hard to abuse that access without getting caught.

Superusers can disable logging, but yeah, the risk can be reduced.
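One common mitigation for the "superuser can disable logging" problem is a tamper-evident, hash-chained audit log whose head hash is shipped off-host (e.g. to an external SIEM). A minimal illustrative sketch, not any vendor's actual implementation:

```python
import hashlib
import json

class AuditChain:
    """Append-only audit log where each entry commits to the previous
    one, so deleting or editing an entry breaks verification of
    everything after it. Copying the head hash off-host makes even a
    superuser's tampering detectable by comparison."""
    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # genesis value

    def record(self, actor, action):
        entry = {"actor": actor, "action": action, "prev": self.head}
        self.head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self):
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return prev == self.head

log = AuditChain()
log.record("agent42", "password_reset:user@example.com")
log.record("agent42", "view_profile:admin@example.com")
assert log.verify()

log.entries[0]["action"] = "nothing_to_see_here"  # tamper with history
assert not log.verify()
```

Deletion of entries, not just edits, is caught the same way, since the externally held head hash no longer matches.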

Same reason why everything is shoddy: customers buy the cheapest or free solution and don't care if it fails.

Why should a business care if it gets hacked? It's the users that suffer and they don't have an alternative.

To add to the other comments: this also assumes trust in the employer. If you're disgruntled and don't trust your employer, then this policy falls flat.

Easy to game the system and offer credible bribes to yourself to collect the bonus. Giving it just some brief thought, it doesn't seem hard to make the chances of getting caught really low.

If not for the threat of involving the authorities, I might agree with you. How many people would really risk up to 7 years' imprisonment (my country's max) to make a bit of money?

With an incredibly low chance of being caught? Many people. Factor in that the initial ask is often something that seems relatively victimless, or at least, not hurtful to any specific person, and ethical flexibility is even easier to come by.

Most people worldwide will take the moral equivalent of a bribe. It may require a sum so large as to be implausible for it to ever happen, but anybody that tells you they wouldn't at least consider illegal activities for life-altering amounts of money is statistically a liar. Millions of people have done and will continue to do it for seemingly trivial amounts of money.

> How many people would really risk up to 7 years imprisonment (my countries max) to make a bit of money?

The prisons of your country are full of such people.

A surprising amount of people, probably.

Sadly, whistleblower protection turns into organized victim-blaming far too quickly and easily.

This assumes money is the only factor and motivation.

> This assumes money is the only factor and motivation.

Only if you assert that it is only useful if it solves 100% of the cases.

There are other motivations, but money is a big one. If you could block X% for a large value of X (I'd say easily > 50) then you're better off than before, even if some insiders driven by other motivations still remain.

I wouldn't say easily over 50%. For example https://www.varonis.com/blog/inside-world-insider-threats-pa... says roughly about half, and that sample might be biased as i assume its easier to take someone to court if they did it for money.

Still your point stands that money is a common factor. However you still have the problem that bribes usually aren't a one time payment but an ongoing thing, so a purely financially motivated adversary would take the total value into account.

Or you could foster a good personal relationship with your employees. If that's impossible for you, some will inevitably hate you enough that they will take the bribe anyway.

You can even go further: tell your employees that you'll be sending out fake test bribe offers, which they'll be expected to report, and that they'll be rewarded for doing so.

Similar to how IT in many large companies nowadays runs internal phishing tests.

That changes the risk/reward, so while every reported bribe is a nice cash bonus, accepting any received bribe becomes significantly riskier, and by 'practicing' bribe reporting it becomes the default reaction.

when I'm reading any public chat speculating about how X was hacked, and I see the phrase "their Telegram group" I generally stop reading. lol

(Not a criticism of the comment above me, btw; that's not the point I was making.)

What point were you making?

And this explains how the LAPSUS$ hacker group was able to breach Microsoft, NVIDIA, and countless others. Apparently, they have been inside Okta's systems for nearly 2 months now.

[1] https://www.theregister.com/2022/03/21/microsoft_lapsus_brea...

[2] https://www.wired.com/story/lapsus-hacking-group-extortion-n...

[3] https://analyticsindiamag.com/lapsus-hack-leaves-nvidia-in-a...

Pretty sure Microsoft uses Azure Active Directory for SSO, not a direct competitor (source: used to work there).

That's generally true but the thing about Microsoft is that it's so huge I wouldn't be surprised at them being customers of a lot of SaaS offerings. Could be via an acquired company that is partially integrated? Or some kind of federated login for partners.

This is a reasonable take in my view. When you acquire a company, you don't get to replace their entire tech stack on day 1.

And sometimes there are chains of dependencies that need to be swapped out in sequence.

So from acquisitions alone it would seem reasonable to expect that Microsoft at any point in time has at least a handful of usages of certain SaaS offerings, even if MS themselves offer their own versions of those services.

That said, the surface area of these "unofficial" 3rd party systems is likely to be quite small in comparison to whatever their official tech stack looks like (which does -- I would assume -- centre around Azure AD). I'm not expressing an opinion on whether the GP is correct to make this link...

I've worked with some very large companies where internally they (surprisingly to me) make use of competitors' products even when that company makes their own version of that product. There are all sorts of legitimate reasons why this happens.

Euh, nope.

It would be confusing to me if Microsoft were utilising Okta themselves. Their own Azure offerings provide everything Okta does (and usually bundled into something you're already buying).

> Over the weekend, LAPSUS$ posted a screenshot to its Telegram channel of what appeared to be information taken from an internal developer account for Azure, its cloud computing division.

Yes I've seen that, but I don't see how people are making the leap to assume that this Azure leak came through Okta.

I'd be kinda shocked to find that Okta isn't anywhere in Microsoft, just given acquisitions. But yeah, no evidence that it's directly related.

Acquisitions get authentication replaced fast (well, fast-ish, we got a year to change over). It might be around somewhere, but, AAD is used nearly universally and I've never seen Okta.

GitHub used Okta; it's been a while, so it may have been replaced, but that's one example.

Azure B2C is a direct competitor to Okta... If MS isn't dogfooding this, then what would they dogfood?

Azure B2C is awful though

I haven't had direct experience with Azure B2C, why is it awful?

Laborious, confusing, error-prone, and extensively but poorly documented.

Do yourself a favor and stay away from it.

Azure is awful in general. Microsoft can't even copy other clouds efficiently.

In my admittedly limited experience it is “alright”, and their sales people don’t pretend that it’s magic like the AWS sales people do.

Honestly I’m so unimpressed with AWS that I find it hard to say Azure is terrible, because the bar for terrible is set much lower.

Seems a bit premature to assume this was the vector used for those other breaches.

One possible route: a Microsoft employee using an Okta protected product/service.

That other product/service was compromised, which let hackers hack into MS employee’s computer, which gave them access to Azure infrastructure.

That seems like an attack vector unrelated to the Okta breach -- if it was possible to attack the MS employee's computer from a compromised app, then whether or not that app was protected by Okta seems immaterial.

Perhaps the app was not compromised, to begin with. Hackers might’ve gotten into the build infrastructure of the app, rewrote it to make it compromised, and then all the users of that app are now compromised.

Lesson: ask your (in this case Microsoft) employees not to use work computers for any personal use.

/me goes to check if my password manager company uses Okta.

Again, this is not related to an Okta hack, employees could be using an app that has nothing to do with Okta and be compromised through this same attack vector.

This is just wild speculation to make up a scenario where Microsoft could be affected by the Okta hack with no evidence to back it up.

Agreed. It’s all speculation at this point. I hope I did not make it sound like I have any evidence to backup my speculation.

I was just trying to come up with a threat vector to explain how it might be possible.

What about the build infra of an app would provide creds to log into ADO?

The screenshots are very worrisome. In the pictures below, LAPSUS$ appears to have gotten access to the @Cloudflare tenant with the ability to reset employee passwords.

[1] https://twitter.com/BillDemirkapi/status/1506109956298317830...

It's not particularly worrisome as shown in the screenshot. It appears the attacker could only send a password reset link or temporary password to the e-mail address of the user account. There is nothing in the screenshot indicating that the attacker could have changed the e-mail address to one controlled by the attacker, thus being able to reset the user's account.

Is it reasonable that they've been inside Okta for 2 months and Okta hasn't noticed?

The well hidden secret among large tech companies is that persistent threats are quite often discovered after ~6 months active presence going unnoticed. Some persist for years.

much longer than that sadly. now guess what most companies set log retention to?

2 weeks?

Yes, this is very common especially when you're living off the land. You take over the one unmonitored host and then you use the internal tools as if you were a normal employee.

"the quieter you are, the more you will hear"

And this is why I would ultimately never trust a centralised company with our authentication infrastructure: something like Okta is an infinitely more attractive target than we are. Their offering is sweet, and I'm always tempted to just give in, but this confirms my decision.

95% of companies on the planet do not have the necessary resources or expertise to run a robust authentication infrastructure, and this one hack (which we really know nothing about at this point) doesn't change that fact.

If you are in the 5%, great! You are probably set already.

If you are not, overreacting isn't going to help anyone. There's no need to start writing user login code tonight.

> There's no need to start writing user login code tonight.

I think it's worth pointing out that replacing Okta for most people isn't just doing login +MFA stuff.

Sure, there's that part of it - but a lot of the value in going with Okta in particular is that so many third parties support integrating with them in ways that other Identity providers are just not supported.

Account lifecycle stuff (via SCIM) is often limited to just Okta support for products.

There's also the account lifecycle stuff so that HR doesn't have to tell you when they hire someone, they just add them into the HR system and all the things flow out from that. Similarly when someone leaves, you can have those workflows go and disable/delete their accounts.

But Okta is just hosted SAML or OIDC. The same can be achieved with self-hosted Keycloak.

In fact, a third-party’s “Okta integration” should work out of the box with Keycloak or any other IdP.
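To illustrate the point that an "Okta integration" is mostly standard OIDC: the authorization-code request looks the same against any IdP, and only the issuer URL changes. The endpoints and client details below are hypothetical; a real client should take endpoint paths from the issuer's `/.well-known/openid-configuration` discovery document rather than hard-coding them.

```python
from urllib.parse import urlencode

def authorization_url(authorize_endpoint, client_id, redirect_uri, state):
    """Build a standard OIDC authorization-code request. The query
    string is identical whether the endpoint belongs to an Okta tenant
    or a self-hosted Keycloak realm."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",
        "state": state,
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

# The same hypothetical client pointed at two different IdPs.
okta_url = authorization_url(
    "https://example.okta.com/oauth2/v1/authorize",
    "my-client", "https://app.example.com/cb", "xyz123")
keycloak_url = authorization_url(
    "https://sso.example.com/realms/corp/protocol/openid-connect/auth",
    "my-client", "https://app.example.com/cb", "xyz123")

# Same query string either way - the "integration" is the protocol.
assert okta_url.split("?")[1] == keycloak_url.split("?")[1]
```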

No, Okta also supports various forms of 2FA/MFA, Active Directory integration, LDAP and SCIM user provisioning. Not to mention the fact that even if there are open source alternatives out there, setting up servers, configuring them correctly and managing/updating them isn't exactly a simple enough operation for the average company admin.

Keycloak has nearly all of this built in. You don’t need to be a crack sysadmin to set it up. If all you need is a 2FA-enabled OIDC endpoint federating to AD/LDAP to serve as SSO you can be up and running quite quickly.

[1] 2FA/Time based OTP: https://www.keycloak.org/docs/12.0/server_admin/keycloak-ima...

[2] LDAP/AD (Kerberos) https://dmc.datical.com/z_resources/images/assets/screenshot...

[3] SCIM (third party) https://github.com/Captain-P-Goldfish/scim-for-keycloak

Re SCIM: that plugin adds a SCIM provider to Keycloak, i.e. provisioning/managing users in Keycloak.

Most of the time I'm talking about SCIM outbound, so provisioning/managing users in some other random service (like Trello, Github, whatever).

That's the major value-add for Okta.
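For context on what SCIM outbound traffic looks like: below is a minimal SCIM 2.0 create-user payload (RFC 7643 core User schema) that an IdP would POST to a downstream app's `/Users` endpoint. The user details are made up, and the actual HTTP call (endpoint URL, bearer token) is omitted.

```python
import json

# Minimal SCIM 2.0 "create user" payload. The downstream service
# (Trello, GitHub, etc.) receives this from the IdP's outbound
# provisioning engine; no human touches the target system.
new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jane.doe@example.com",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jane.doe@example.com", "primary": True}],
    "active": True,
}

body = json.dumps(new_user)
assert "urn:ietf:params:scim:schemas:core:2.0:User" in body
```

Offboarding is typically a PATCH on the same resource setting `active` to `false` rather than a hard delete, which preserves audit history.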

I see. Thank you for clarifying. Here’s the author of the Keycloak SCIM plugin on that functionality:


So it will likely require some changes to Keycloak internals to address that reliability issue, which will enable building a reliable interface for your use case.

Keycloak supports 2FA/MFA, Active Directory, LDAP. You might need a plugin for SCIM, but it does a whole load of other wonderful things, and it doesn't cost $2 per user per month.

You will have to find someone to host and patch it, but we shouldn't pretend it's up there with write your own crypto.

Using SaaS is fine too. It's often cheaper than hiring a person until you need said person anyway (and at that point they're just a single point of failure), and people do underestimate the effort of running a production service well.

But we shouldn't be so anti-intellectual that we pretend that it's the only way to do things, and that well supported foss products don't exist.

> even if there are open source alternatives out there

This reads to me like you are unfamiliar with keycloak, which is a very mature and capable solution with plenty of support for everything you mention other than SCIM.

Hosting it is not overly challenging, running it in a container would suffice for most companies. Updates are in my experience a smooth process.

I'm not claiming it's as easy as an "auth as a service" platform, but as we see, that comes with its own costs.

for companies that are already gsuite users, what does okta provide over what one can get with gsuite saml integration?

GSuite's SCIM options are lacking.

So for example if you want to use AWS SSO tied to GSuite, you have to manually manage users, group membership, etc inside AWS SSO.

Similarly for a bunch of other products.

A couple of companies I've worked for now use GSuite as their primary accounts, and while a ton of things authenticate through GSuite (either as a built-in integration, or as SAML) - when you onboard/offboard users it involves going into a bunch of different systems to create/invite, set up group memberships, and then later deactivate accounts.

We're still considering Okta to solve all that workflow stuff. Yes, even after this event.

I believe Okta can also do custom security flows that GSuite can't.

So, if you're on a trusted device+network, then you might not need a 2FA challenge. Or conversely if you're trying to access a sensitive service - then always require a 2FA challenge every X hours.
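A step-up policy like that is easy to picture in code. Here's a hedged sketch (the function name, the 9-hour window, and the trust signals are illustrative, not Okta's actual rule engine):

```python
from datetime import datetime, timedelta

REAUTH_WINDOW = timedelta(hours=9)  # illustrative re-challenge interval

def needs_2fa(trusted_device, trusted_network, sensitive_app, last_2fa):
    """Return True if this login should be challenged with 2FA.

    Sensitive apps always re-challenge once the window expires;
    ordinary apps skip the challenge on a trusted device + network.
    """
    if sensitive_app:
        return last_2fa is None or datetime.utcnow() - last_2fa > REAUTH_WINDOW
    return not (trusted_device and trusted_network)

# Trusted laptop on the office network, ordinary app: no challenge.
print(needs_2fa(True, True, False, None))   # False
# Same laptop hitting a sensitive service, never challenged: challenge.
print(needs_2fa(True, True, True, None))    # True
```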

yes my company makes us do the 2FA every 9 hours for github (so annoying) via okta, though apparently our gsuite logins via okta are much longer lived.

though that sort of answers a followup Q: it would seem there would be a market for something that punts the authentication to gsuite's saml but automates the user creation (and deactivation) on the different services, though one still wouldn't have the "custom flows" as you said.

but if the custom flows aren't that important, I'd rather trust google to handle my authentication than okta, and in my pseudo-okta without authentication scenario, even access to my pseudo-okta would authenticate via google.

Yes, you can achieve that specific part with Keycloak, and you may end up owned in a different, but equally awful way: https://www.keycloak.org/2021/12/cve There's nothing wrong with Keycloak, and me pointing at this CVE isn't meant as a gotcha. The point is that just because you're doing it yourself doesn't mean you're going to have better protection. Doing this well is difficult.

Okta is a security company, and due to a well-organized, well-funded attacker, they were able to be breached. If a similar attacker is trying to breach you, you're likely going to get breached.

> If a similar attacker is trying to breach you, you're likely going to get breached.

I think the logic here is accepting that if an attacker is trying to breach you, you’re likely going to get breached. But, if an attacker is trying to breach someone else, you’re still safe(-ish). With a centralized service, you don’t have to be the target to feel the effect.

But is $YOU, as one of the tens of thousands of $YOU companies, going to be targeted in a dedicated, not automated way? Maybe, but I would say there is at least one or even two orders of magnitude of difference compared with Okta or other well-known companies.

There's a cost efficiency point where this argument becomes valid.

At small company sizes, where a dedicated security team or an admin with sufficient expertise to do a proper setup is unaffordable, the risk of an Okta breach is lower than the risk of an automated breach of a custom setup. When you are big enough to deploy a self-hosted solution and maintain it at reasonable cost, then yes, I would agree with you, a self-hosted setup may reduce risks. When does that happen? When your company has over 1000 people (to break even with your Okta spending) or when your revenue per employee is high enough to spend 200-300k€ every year just on security.

I would think that 200-300k€/yr is low for this. At that budget for security you've maybe got two or three people, a bunch of software, and you're still building out monitoring and alerting systems. You haven't even begun to shift left and you're nowhere near being able to influence the product development lifecycle. You almost certainly don't have a security operations group capable of running a critical service 24/7.

That said, I would think Keycloak would fall under IT in many orgs. So you need a security org mature enough to have trained IT to the point where they can successfully run something this sensitive and security can have confidence in everything around it.

At a guess, that's not before you have 15-20 people in the security org. Probably something like: alerting and incident response (i.e., ops) is 3-5, vendor management 2-3, governance/audit 2-3, security engineering 3-5, security architecture 2-3, management 2-3, privacy 1-2. Run something like this for a year or two and see how the business as a whole shapes up.

As you say, it's a cost question. At what point does running your own IdP become the most valuable thing your company could do with a given chunk of money? One of the perks of a vendor is that the contract comes with liability. How much is that worth to the company?

Makes sense. Basically you are talking about 1.5-3M in annual spending just on salaries. With average IT spending around 3% in retail and manufacturing companies this will mean, that they must have several billions of annual revenue and security will still be a significant part of their IT budget. For software companies this calculation will be completely different and the threshold can be lower.

If you use anything off the shelf like Keycloak, you're 1 CVE away from being automatically breached too, which is very similar to the situation you'd be in from Okta etc. being breached.

I agree, but it depends on the type of attack. Apparently this LAPSUS$ group uses corruptible employees/contractors to get in. As a small employer with a personal relationship to my employees I have an advantage that I might not want to throw away by delegating security to a company that doesn't have this advantage.

Okta delegated its operations, not its security; it just happens that their operations is your security.

Your company isn't delegating any operations away? Not even support? If they are, what level of access do those people have? Are the internal services they have access to as secure as your perimeter?

Even if they aren't, the attack can come through any internal employee, and your least security conscious person is your weakest link. Again, what do they have access to? Will they be able to use them to move laterally? Spear phishing from an internal employee to another employee is really, really effective.

Comparing my microscopic Keycloak installation to Okta, I'd rather be a needle than a haystack.

Your keycloak instance will be found and indexed via automated scans. It will then be attacked minutes after the next CVE in keycloak, Java, Tomcat, or JBoss is disclosed. If you don’t have the 24x7 security team to handle that reality a managed service is likely a better option.

That's what I meant by being a needle rather than a haystack -- one of many Keycloak instances rather than client to one big SaaS provider.

BTW Keycloak.current has tossed JBoss & friends for Quarkus. In any case, it's been a remarkably safe product over the years.

It doesn’t matter if you’re a “needle” if Shodan has a list of all “needles” readily available to attack. Even script kiddies can write for loops.

Proxyshell and similar recent issues have shown "near-instant compromise" to be the current state of affairs. Most instances are attacked within hours, or even days before a vulnerability is disclosed publicly and hits the news.

It's not just that. The most important part of Okta is the directory integration, not the SSO part. For small companies that have never acquired anyone, it doesn't matter.

It also makes user lifecycle someone else's problem. Very important. As a medium and larger sized company, when you have to tell your customers your homegrown IAM was pwned, you're in for a world of hurt. When you're just in the boat with everyone else who got popped, it's an easily recoverable incident.

If you have any sort of footprint, it's pretty much expected you'd have protective monitoring so you can articulate what these pwned accounts have actually achieved on your infrastructure.

It doesn't really matter who runs your auth service, if the auth service getting pwned results in a reportable attack, you're in for a world of hurt.

The next step is to articulate why this isn't your fault and that you did everything you reasonably could have done to not get pwned.

Do you have evidence to support you did your due diligence on your provider?

Time for B2B services to support self-hosted installations so the security boundary stays in house.

> 95% of companies on the planet do not have the necessary resources or expertise to run a robust authentication infrastructure

Wait, what exactly makes this problem so fundamentally hard?

What stops someone from just releasing a software system that solves the authentication infrastructure problem in a general way (e.g. as Kubernetes did for cloud orchestration)?

Economic incentives, and strongly held technical opinions like those of paxys occupying staff+ positions at every participant in modern computing. We've deluded ourselves as an industry into believing that computers are too hard to build and operate for anyone but Marvel's Avengers of the Bay Area. That lie underpins the entire market capitalization of public cloud, and it's amazing how little clothing that lie actually wears in the end.

Kubernetes doesn't go after it, either; it complicates your system so much that it drives the same lie. All Kubernetes does is assign a workload to a machine and hook crap up to it. That problem was solved in the 1960s by systems that don't take multiple FTEs just to operate themselves. When you look at Kubernetes in this light there's a moment of clarity waiting regarding why it exists in the first place, and why it will never in a millennium replace Borg.

In 2010, I joined a social media startup. I was the only operations employee on the team of nine who had ever touched an actual server with my hands. We’re on the third or fourth generation of systems people now who’ve never heard anything except the cloud-native drum and how they’ll never be qualified to run their fingers along the faceplate of the computers running their business. paxys is sharing a derivative of that because it is a strongly held industry belief and a safe language to speak in this market. It just doesn’t make it true, and that’s a powerful realization, but that’s not something most people are ready to hear and truly understand.

You’re asking the absolutely right questions. You’re just asking an industry that gave up on itself so long ago that kids born after UTAH2000 can vote now.

same problem as centralization, it's a very attractive target for attacks, and if it's broken, you are compromised too

I don't understand the centralization argument. If anything, software solutions should make systems less centralized, since anyone will be able to run their own auth infrastructure on their own hardware.

If you mean that a single codebase could be an attractive target for attacks, it's still not a problem since the Free Software community loves having a dozen of competing implementations of the same thing just for the sake of it :)

Sorry, I don't mean that the actual system is centralized, as in everybody pings back to the same server or organization, but that it shares a similar vulnerability with having the same codebase running on local hardware.

And this isn't even solved by having standard protocols, open source implementations as you mentioned -- only mitigated. OpenSSL was exploited too.

It still raises the bar for the attacker. Now they explicitly need to seek out and target the company. Even if breaking into any single company might now be easier, breaking into all of them is much more difficult than if they used a single point of failure like Okta.

While it feels that way, in my experience, it is very easy to break into 10 poorly secured systems compared to just 1 properly secured and monitored one. Okta may have messed up here, and there's no doubt they're a large target, but they're still a much harder target to breach compared to anything 99% of companies will be able to set up (unless you're a Fortune 50 company perhaps, and even then - it may not be cost effective even if you can make it work). Expect to have 50+ engineers and other technical staff to be anywhere nearly as secure with a custom solution. A roll-your-own solution with a small crew supporting it can never compete.

The amount of effort they invested into breaching Okta is probably way more than they'd invest in breaching any single company's SSO.

If "LAPSUS$" is after big companies with significant IP they can hold ransom for millions of dollars, it's most likely not even worth their time to find let alone attack your small company's on-prem SSO even if it was a single shell command away, so you'd still be more secure.

> If you are not, overreacting isn't going to help anyone. There's no need to start writing user login code tonight.

I agree. That being said, there's no need to write login code yourself. Trust on the existing, tested, proven OSS solutions and libraries available, if you decide to not rely on a provider like Okta.

"Trust on the existing, tested, proven OSS solutions and libraries available, if you decide to not rely on a provider like Okta. "

Can you recommend some?

Yes. Keycloak, for one: https://www.keycloak.org

I've also heard good things about Hydra: https://www.ory.sh/hydra/

Depending on your needs, an SSO proxy like Vouch works well for internal services: https://github.com/vouch/vouch-proxy

There's also an abundance of libraries for different programming languages to implement authentication or authorisation, lots of them battle-tested by thousands of services. It's not like providing secure services is impossible without Okta.

Yes, but I'm having trouble with the client side, especially Cordova or Capacitor. In a hybrid app with Keycloak in the back it's just not working; I tried the cordova-native Keycloak adapter with no luck. Pure Vue or Angular: no issues. Flutter has issues because AppAuth is not very well documented, and it's even worse for Android Java or Kotlin. Security or auth should be treated as the president of the first-class citizens; instead it's a mess. On the backend everything is fine, but the frontend is in a really bad state.

what about having an extra in-house level of security? Something that does not require the overhead and maintenance of an Identity Management System? Not sure what that would look like though.

I wouldn't throw out the baby with the bathwater. I don't know how publicly it's acknowledged, but most people I know in security have always had a poor view of Okta's security.

On the other hand, I feel fine using GSuite for SSO because we have a much better view of their security.

That said, it sucks that everyone is so fucking bad at security. I maintain that it is not that hard.

> but most people I know in security have always had a poor view of Okta's security.

What are their usual criticisms? Genuinely curious. I've always hated their UI, their docs, and lack of real support but felt that at least they had security going for them. The last assumption might have been proven wrong tonight.

Also, what would be a reasonable alternative to Okta now that their nearest competitor, Auth0, was acquired by them. Is GSuite a reasonable alternative for SSO with multiple different providers and supporting SAML, etc.? Thanks in advance.

A problem in infosec is information dissemination. Lots of their criticisms are based on backchannel information shared over drinks. It makes it difficult to discuss publicly.

I wouldn't want to make a recommendation, but I'll say that my company uses GSuite.

A typical opsec issue in many companies even to this day is allowing SSH shell access to critical systems with LDAP passwords or user-controlled authorised keys and without 2FA and without host key verification. There are trivial browser based local network attacks to gain reverse shell access with such setups. Just because those systems are behind a firewall and VPN, people tend to falsely believe they are more secure.

> or user-controlled authorised keys and without 2FA and without host key verification. There are trivial browser based local network attacks to gain reverse shell access with such setups.

I'm struggling to see how that attack works without a copy of the private key.

Any pointers for what to search for to understand this better?

The key thing here is _user controlled_ private keys: this means that those can leak, and if they leak, you're in trouble.

A better solution is to use ephemeral SSH keys generated using an SSH CA. This kind of thing can be implemented with Hashicorp Vault, though I'm sure there are plenty of other solutions out there.

It simplifies key checking on servers too, as they just need the details of the signer, and there's no need to juggle keys in LDAP.
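The whole CA flow can be reproduced with stock OpenSSH tooling; Vault just automates the signing step. A sketch driving ssh-keygen from Python (assumes `ssh-keygen` is on PATH; identities and the 8-hour validity window are illustrative):

```python
import os
import subprocess
import tempfile

workdir = tempfile.mkdtemp()

def keygen(path, comment):
    """Generate an ed25519 keypair with an empty passphrase."""
    subprocess.run(["ssh-keygen", "-q", "-t", "ed25519", "-N", "",
                    "-C", comment, "-f", path], check=True)

keygen(os.path.join(workdir, "ca"), "ssh-ca")    # the CA keypair
keygen(os.path.join(workdir, "user"), "alice")   # user's throwaway key

# Sign the user key with a short validity window: an ephemeral credential.
subprocess.run(["ssh-keygen", "-q", "-s", os.path.join(workdir, "ca"),
                "-I", "alice", "-n", "alice", "-V", "+8h",
                os.path.join(workdir, "user.pub")], check=True)

# Servers only need `TrustedUserCAKeys /etc/ssh/ca.pub` in sshd_config;
# no per-user key juggling in LDAP. The signed cert lands next to the key:
print(os.path.exists(os.path.join(workdir, "user-cert.pub")))  # True
```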

You'll still need a way to deliver those ephemeral ssh keys to a user controlled location, or make them indirectly usable by users (which amounts to the same thing).

In my company, we have a wrapper around `ssh` that handles that. That's the easy bit.

Jumpcloud seems to work quite nicely.

Security IS hard. Every single day there’s a zero day practically released.

Add in systems that were built for X and now doing Y. This is hard to get right. A single slip up will lead to you being compromised.

> Every single day there’s a zero day practically released

Yeah, people need to stop using memory unsafe languages. They choose not to.

> A single slip up will lead to you being compromised.

Not if you add multiple layers of security. Like sandboxing. Or mTLS. It's not hard to do that.

edit: Let me clarify. Security isn't hard generally, but it's hard individually, because you're drowning under everyone else making it artificially 10000x harder.
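To make the mTLS point concrete, here's what the server side looks like with nothing but the Python stdlib: one attribute turns ordinary TLS into mutual TLS by making the server demand a client certificate. The cert file paths are placeholders:

```python
import ssl

# Server-side mTLS: refuse any client that can't present a certificate
# signed by our internal CA.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.verify_mode = ssl.CERT_REQUIRED  # this one line makes the TLS mutual

# At deploy time you'd load real material (placeholder paths):
# ctx.load_cert_chain("server.crt", "server.key")
# ctx.load_verify_locations("internal-ca.pem")
```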

Log4j was memory safe. Tho I agree on mTLS and even would like most valuable networks to be connected only via an allow list of safe-ish destinations. It would make things a lot harder, and the logs of denied hosts would also be a nice warning.

Yep, after memory safety there's still work to be done. But it'll be a lot less work.

Actually, the fact that memory safety bugs are more difficult to exploit seems to have increased the rate of vulnerabilities discovered and exploited at higher levels (think insecure feature design rather than low-level implementation bugs), and once something like that is discovered, it can often be exploited in a way that is either much more pervasive or far harder to detect. So no, safer languages won't stop security from being an issue. Secure design, implementation, configuration, and frequent red-teaming exercises are the only way to reduce your risk, and even then, expect to reduce the rate by some percentage but never reach zero.

Converting the entire stack to memory-safe languages to save work is definitely in the “easier said than done” bracket.

Just curious, what about the exploits targeting say Java VM?

> Yeah, people need to stop using memory unsafe languages. They choose not to.

Golang and Rust are not magic bullets that makes systems automatically secure.

Flaws in application logic have little to do with language choice.

Also consider the effort and money it takes to rewrite a multi million LOC system with several dependent apps. The new trendy languages introduce breaking changes, switch paradigms, and have less mature ecosystems.

I lost faith in Okta when we implemented OAG. They don't have an AWS Marketplace version, so if you want to run it there you have to manually convert their ESX appliance. Once it's up and running you can't log in and get a shell (well, you can using Systems Manager on AWS...), and installing any agents on the box means support won't help you with anything.

Then their SNMP MIB doesn't work properly. So you have a box that is proxying some of your most critical systems, systems so old that integrating them with MFA requires OAG (think mainframes, old ERP systems, etc.), and you have to take Okta's word for it that the server is secure and not hacked.

They do thankfully support Syslog for logs, but again you have to take their word for it that you're getting all the logs, because you can't access the system to verify.

Having said all that. OAG solves a very real business problem and it is hard to find a competitor in the market with the level of integration it has to legacy platforms.

Interesting that you mention GSuite/Google.

I don't remember Google having any large-scale security incidents.

Have they been better at playing down their security incidents or are they doing something very right that the rest of the industry can learn from?

> are they doing something very right


> that the rest of the industry can learn from


It's simple: they pay absurd amounts of money for top talent and let them work. Look: https://www.levels.fyi/company/Google/salaries/Software-Engi... can your company afford to pay $200-300K cash, not to mention serious stock, to bread-and-butter mid-level engineers?

Yeah that's a fair point. As much as I badmouth Google in other areas like product longevity, I don't usually laugh at their security related offerings and initiatives.

Google security is very well regarded in the industry for good reason.

Aurora was the major incident. Google has invested heavily in a number of areas with regards to security, such as BeyondCorp, but I haven't talked to anyone from Google sec in a few years so I don't know how things have changed.

A poster further up mentioned something that I think is often ignored.

Security isn't just about overflows and injections, outsider enemies vs insider allies. Any human with privileged access that can be compromised will eventually be compromised as the perceived value of his privilege increases beyond the cost of compromising him. Logging, distribution of privileges, and other such solutions aren't really solutions so much as just a sort of cat and mouse game.

I would claim that impenetrable security is not only hard, but ultimately impossible.

This comment won't age well.

You mean you expect GSuite to fall at some point? Could be, I don't have good insight into Google's internal security posture these days.

+1 to this. Why outsource the most business-critical parts like auth and email to a third party, knowing that they're the biggest target? I know that the most common argument is "they know $service more than we do", but don't really have a counter argument against it.

Because in both auth and email one small screw-up can allow for a company-threatening breach. As someone who has performed penetration tests for all of: companies that roll their own SSO, companies that use a third-party SSO service, and companies that provide third-party SSO services, the failure modes of contracting out to a third party are a bit easier to manage if you are a small company, and chances are that if your SSO provider is breached, they are going to care about bigger fish than you.

For medium to large enterprises, the calculus is harder and there's going to be months of flame war trying to hash that out.

It's expensive, and keeping the expertise around and maintaining it was seen as not worth the cost.

The problem is motivation to break it trends up over time as the service gets more and more popular.

In some ways, if Okta can fall basically anyone can. Sometimes there's an advantage in being smaller. One would hope that the actual attack surface on Okta was smaller than the post seems to imply. That there is even any way to get full access to all customers would seem to be a critical failure.

> In some ways, if Okta can fall basically anyone can

This might be true, but for someone to "fall" they must be targeted to begin with.

If "LAPSUS$" is after big companies with significant IP which they can hold ransom for millions, as a small business you'll be safe because they won't even bother attacking you, and may not be willing to put the same amount of effort into breaching your small company's SSO as they would for Okta.

So hire someone who does, or consult an outside company to do the work for you. You can get the expertise without necessarily having it in-house. There are agencies both big and small that can help you, including training your existing staff.

Yes, it's going to cost more upfront, but there are a wide range of options between "rely on a blackbox PaaS/SaaS" and "write and deploy everything yourself".

> consult an outside company to do the work for you

Maybe a company that specializes in this kind of thing? Like Okta?

I meant specifically to do an integration for you on your infrastructure, not to use a centralized PaaS/SaaS like Okta. I'm not in a position to say that Okta never does this for anyone, like key customers, but it is not part of their standard offering.

But you said you want a company with expertise, who has more expertise at building and running an SSO service than a company that's selling that service?

How do you have an SSO system without a centralized service? Whether you buy it as an outsourced SaaS service or run it yourself, it's still a centralized service, but now you're on the hook for managing it and keeping it secure.

The parallel to conventional software infrastructure would be: Hire staff, or contract local expertise from reputable businesses, to build and maintain your software (probably using existing technologies rather than writing everything from scratch) on bare-metal or colo, rather than reaching for fully managed PaaS for your entire stack.

Or for comms, run a team- or company-wide chat server rather than relying on Slack/Teams/Discord.

Either you're missing my point (and maybe I didn't express myself super clearly), or you're pitching a false dichotomy.

> who has more expertise at building and running an SSO service than a company that's selling that service

OP illustrates well the consequence of that line of reasoning. I'm certain this is just the beginning as more too-big-to-fail vendors get compromised.

The cloud is just somebody elses' computer, "Move fast and break things" mean things get broken, etc.

Whenever we get subscription services for "smart security" involving cloud-connected cameras, AI-controlled locks and drones, would you also consider that the one-size-fits-all answer over having security guards or contracting companies like Securitas or your local equivalent?

> now you're on the hook for managing it and keeping it secure

You always were. There is no free lunch.

> who has more expertise at building and running an SSO service than a company that's selling that service?

If Okta gets away with this (and the Equifax breach and subsequent outcome suggests they will) it proves that the market doesn't actually care about security, thus a company selling that service has little incentive to actually invest in security.

Because auth is super hard? Have you tried implementing SSO for arbitrary OAuth2 providers?

Why would I ever need to support arbitrary OAuth2 providers? As long as they're following the standards, a generic OAuth2 client should suffice. If they're misaligned with the standards I wouldn't be using their service.

I can tell you never have, because there's no such thing as a generic OAuth2 implementation. They're all slightly different and incompatible. SAML is what you really want in terms of making things work more generically, but even that is generally a nightmare, as you still need mappings.

I can tell you don't know about OpenID Connect. ;)

No, but there are both open source libraries, and complete authorisation servers. If you trust Okta, you can trust these, and embed them in your infrastructure. Auth only gets super hard if you try to be smart and deviate from best practices.
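To illustrate why staying on the standard path is enough: OpenID Connect standardises endpoint discovery, so a generic client only needs the provider's discovery document. A sketch with an illustrative (truncated) discovery document; the issuer URL, client ID and redirect URI are made up:

```python
from urllib.parse import urlencode

# What https://idp.example.com/.well-known/openid-configuration
# might return (truncated to the fields used here).
discovery = {
    "issuer": "https://idp.example.com",
    "authorization_endpoint": "https://idp.example.com/oauth2/authorize",
    "token_endpoint": "https://idp.example.com/oauth2/token",
}

def authorize_url(disc, client_id, redirect_uri, state):
    """Build the front-channel redirect for the authorization-code flow.

    Because the endpoints come from discovery, this works against any
    conformant OIDC provider, not just one vendor.
    """
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",
        "state": state,
    }
    return disc["authorization_endpoint"] + "?" + urlencode(params)

url = authorize_url(discovery, "my-client", "https://app.example.com/cb", "xyz")
print(url)
```

Swapping providers means swapping the discovery URL and nothing else; that is the interoperability the sibling comments are arguing about.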

I agree with you on not outsourcing authentication etc to a big provider like Okta, but OAuth2/SSO can get ridiculously annoying in large organization/s. If it's one app that you're dealing with, no worries. If it's multiple operating systems, multiple legacy and new apps etc, its a headache. Not that Okta particularly helps with that either though, it's a problem full stop.

Doing user IP address change detections as part of a heuristic vector for account compromise is sometimes a best practice and sometimes a great way to generate a bunch of useless noise, but I know which is less likely to break UX by integrating such a feature.
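As a toy illustration of that heuristic (a real system would weight geo/ASN data and login timing to cut the noise; the names here are made up):

```python
# Last-seen IP per user; a real system would persist this.
last_ip = {}

def login_event(user, ip):
    """Return True when a login arrives from a different IP than last time."""
    prev = last_ip.get(user)
    last_ip[user] = ip
    return prev is not None and prev != ip

print(login_event("alice", "203.0.113.7"))   # False (first sighting)
print(login_event("alice", "203.0.113.7"))   # False (same address)
print(login_event("alice", "198.51.100.9"))  # True  (changed: raise a signal)
```

Whether that `True` triggers a 2FA challenge, an email, or nothing is exactly the UX trade-off being described.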

Can you link me to these open source libraries that will handle arbitrary oauth2 providers?

As I wrote in a sibling comment:

Keycloak, for one: https://www.keycloak.org

I've also heard good things about Hydra: https://www.ory.sh/hydra/

Depending on your needs, an SSO proxy like Vouch works well for internal services: https://github.com/vouch/vouch-proxy

There's also an abundance of libraries for different programming languages to implement authentication or authorisation, lots of them battle-tested by thousands of services. It's not like providing secure services is impossible without Okta.

This seems to be one of the more popular ones: https://www.keycloak.org/

Thanks, I'll check Keycloak out.

This is ridiculously reductive.

Could you expand a little? Of course a HN comment is not giving you full instructions on implementing custom authentication infrastructure, but that doesn't mean it's impossible or somehow "wrong" to do so, if you rely on established libraries and patterns.

OAuth2 is a really complicated standard and is complicated to get correct. I mentally compare it with SIP which also has several open-source implementations but still often ends up getting outsourced to others due to the sheer complexity of the thing. Though in OAuth2's case I'm willing to say it's complicated because security is complicated. With SIP I really don't know why lol.

Can't you say the same thing about a lot of stuff? Why outsource hosting to a third party like AWS, knowing they're the biggest target? I trust AWS a lot more than myself when it comes to securing infrastructure.

That's a good one! :thumbsup:

> something like Okta is an infinitely more attractive target than we are

Agreed. Centralization of attack surface is a major risk.

I understand the allure and have been in situations where even knowing the risks, the path is forced upon us because "nobody is fired for buying IBM".

But yes, a centralized service holding all the marbles is an infinitely more attractive target than lots of disjoint little systems which aren't individually very interesting. The centralized system may be better run (arguable, but at least they have the resources and it is their primary job) but they also face a much higher attack concentration.

A compromise solution is to have both. I've been in one company where okta was used as the primary SSO and for most internal systems the only one. But for the very sensitive resources, one had to be authenticated via okta but also via a homegrown MFA system.

Apparently Plaid has 40% penetration of US bank account holders[1], and of course their entire business is storing the username and passwords for all those accounts so they can screen scrape. I struggle to think of a more attractive target than that!

[1] https://www.thediff.co/p/plaid-data-layer-to-payments-layer

So unfortunate banks haven't implemented SSO and fine-grained access control after all these years.

Ain’t that the idea of the Open Banking standard?

Good point. Updated to "implemented".

That's part of the equation.

In a B2B setting you get rid of a LOT of paperwork and due diligence by just saying "We use Okta for X, Y, Z." Hell for SOC2, Okta got rid of a shit ton of verifications for us at a critical stage of the startup I was leading, Papa.

Sure you can build your own but at that point you're wasting months for zero customer benefit. You could be building product. I don't know what the right answer is.

> In a B2B setting you get rid of a LOT of paperwork and due diligence by just saying "We use Okta for X, Y, Z."

No, not just that.

There is a lot of conversation here about building your own vs using something 'centralized'. All software comes with the “build vs buy” tradeoff.

Commercial open source companies (COSS) minimise those tradeoffs and give you the best of both worlds. That is - The power of build and the speed of buy.

You can migrate between a completely self-hosted solution and a managed service as your needs change (for many COSS alternatives).

Ah. Yes. The old 'security through obscurity' protection.

It doesn't work.

No, that's distinctly different. This is about distributing attack surface, not obscuring authentication infrastructure. Are we really at a point where not relying on external account providers is considered unusual, or a bad practice? I'm not buying it.

> something like Okta is an infinitely more attractive target than we are

In my book that counts as "obscurity": "we are not an attractive target, hence we won't be targeted". You will be targeted as long as your defence is weak. Only if you manage to set up a defence comparable to a large provider's does this argument fly: all things being equal, attackers will attack the more attractive party. But all things are not equal. (I'm not certain about this; it may very well be that Okta has really poor defences in place, in which case my argument falls apart, because you'd be able to "make all things equal" much more easily. I doubt it, though.)

> Are we really at a point where not relying on external account providers is considered unusual, or a bad practice?

No. I consider it reasonable to "build your own", but only in certain cases. Things like auth are not your core domain (unless you are an auth provider, obviously), so all effort and time you spend on building the umpteenth login/auth flow is not spent on building the stuff that sets you apart. Even if you drag in a standard library and only spend an afternoon: you're still maintaining it, testing it, etc. And with external parties, generally, you'll be following market best practices: fingerprint login becomes standard? You'll get it almost for free, whereas in the DIY case you'll be designing, testing, and building such progressing tech forever. The economics for building are just wrong.

The same goes for security: as pointed out above, I highly doubt you'll be able to match the level of defence a large, focused, experienced party can obtain.

But there may very well be cases in which building your own login/auth makes sense and the tradeoffs in economics and security (and features) make sense. Maybe it has to integrate into some legacy; maybe (legal) requirements enforce you to keep it all in-house, etc.

> In my book that counts as "obscurity". "We are not an attractive target, hence we won't be targeted". You will be targeted, as long as your defence is weak.

I find this point to require more subtlety; "we are not as much of an attractive target, so we likely won't be targeted by a sophisticated, human adversary" is more apt for the situation. The attack vector is just not comparable to Okta's scale, which is true for the majority of companies and platforms out there. That doesn't mean security should be taken lightly, just that the magnitude and frequency of bad things happening will be lower.

> Things like auth are not your core domain

That's a good point, but it depends on the situation. I wouldn't recommend spending lots of time on this yourself, but there are standard libraries or OSS solutions available. If you're optimising for reliability, it might be beneficial to keep auth within your own systems. If you're optimising for privacy under GDPR, for example, it might be beneficial to not trust your users' data to an external provider. And so on.

> I wouldn't recommend spending lots of time on this yourself,

Unfortunately, this is in direct opposition to

> That doesn't mean security should be taken lightly

I agree about the subtleties that, indeed, make the tradeoff more nuanced. You are less likely to be targeted. But, given that you "wouldn't recommend spending lots of time on this yourself" (my recommendation too), the trade-off tips dangerously toward the "attractive target" side too.

> but there are standard libraries or OSS solutions available.

Which, unfortunately, is hardly different from "a centralized target". Look at, for example, WordPress. I used to have a WordPress hosting company: the constant battering on the entry doors, by bots and humans, is real. The number of successful breaches is staggering. I don't host anything WordPress anymore, yet IPs that have never seen WP are hammered by bots trying to breach WordPress admin, XMLRPC, WooCommerce and so on.

By your reasoning, none of the WP sites we hosted back then were "attractive". By that reasoning, the servers I run now, without WP, are even less attractive. Yet they are battered by attackers, 24/7. And breached (not my current servers, though) quite often. Small websites for coaches, mom-and-pop shops, the garage around the corner, your local football fanclub, and so on. Breached. Because they were an attractive target: not by being centralized, but precisely because they weren't. Exactly because they use (OSS) libraries, combined with very little resources for defence, they are attractive.

Okta's CEO just released a statement, and it's very brief. Unsure what to make of this. Waiting to see LAPSUS$'s response.

> In late January 2022, Okta detected an attempt to compromise the account of a third party customer support engineer working for one of our subprocessors. The matter was investigated and contained by the subprocessor. (1 of 2)

> We believe the screenshots shared online are connected to this January event. Based on our investigation to date, there is no evidence of ongoing malicious activity beyond the activity detected in January. (2 of 2)


It's always amusing how companies redefine their subcontractor relationships depending on the situation. The same support engineer would be presented as:

1. when marketing to customers - "our support engineer"

2. when reporting on an incident - "a third party customer support engineer working for one of our subprocessors"

Your entire service is based on the security of your product. You owe everyone more of an explanation. The Lapsus telegram clearly shows they are using an internal Okta Admin tool to reset/disable passwords and 2FA. I'm more inclined to believe the evidence in front of my eyes than your statement.

It's clear some service team at Microsoft used Okta for SSO, or a contractor did; that's why they only got 37GB of code and Bing/Cortana, not the Windows OS or internal tools.

The group probably enumerated as much access from Okta as possible and when they had every juicy target decided to release it all for the lols. They probably have more in the works too.

I'd suggest everyone revoke any Okta SSO sessions for your apps today and force new sessions as a precautionary measure.

> It's clear some service team at Microsoft used Okta for SSO, or a contractor did; that's why they only got 37GB of code and Bing/Cortana, not the Windows OS or internal tools.

Did I miss some news, or...? I know Okta and Microsoft were compromised, but is there anything showing the two are related, apart from the hacker group and the timing?

No, just the timing and group. Pure speculation

There is cloudflare PII in one of the screenshots, I wonder if Cloudflare was notified of access to their data in January.

Speculation from here on.

In my personal opinion, Cloudflare's actions indicate that Cloudflare was not notified of the breach until today.

> We are aware that @Okta may have been compromised. There is no evidence that Cloudflare has been compromised. Okta is merely an identity provider for Cloudflare. Thankfully, we have multiple layers of security beyond Okta, and would never consider them to be a standalone option.

- @eastdakota - https://twitter.com/eastdakota/status/1506143353544478724

‘“Merely” an identity provider for [us]’ is selling the understatement of the year

Identity + AuthN != AuthZ
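That distinction is worth spelling out: even with the IdP fully trusted, the authorization decision can stay local, which is presumably part of what "multiple layers beyond Okta" means. A minimal sketch (names and data shape are mine, purely illustrative):

```python
def is_authorized(user: str, resource: str, action: str, grants: dict) -> bool:
    """AuthN (the IdP's job) establishes *who* the caller is.
    AuthZ decides *what* they may do -- and that check stays local,
    so a compromised IdP alone doesn't mint new permissions."""
    return action in grants.get((user, resource), set())
```

With this split, an attacker who can impersonate a user via the IdP still only gets that user's locally granted actions, not a blank cheque.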

That was a pretty rapid response from CF though, are we sure they didn't know ahead of today? How long did they have to determine "no evidence" before making a public statement about it?

Any competent operation is continuously monitoring all available signals for signs of breach. All I read into this is that their systems have not identified any IoCs. Doesn't mean it hasn't happened, but, if you're relying on something non-automated to make these kinds of determinations, you're already pretty screwed. Forensics is definitely a thing in cases where there's reason to believe a breach happened, but, it's not the thing that will be used to decide something has happened worth investigating.

Thus, it should take approximately zero actual time to conclude what was stated here.
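As a toy illustration of what that automated signal-matching looks like (the patterns and names below are mine and purely hypothetical; real systems consume structured threat-intel feeds):

```python
import re

# Illustrative indicator-of-compromise patterns only. The hostname is the
# one reported elsewhere in this thread; the IP range is a documentation
# prefix (RFC 5737), not a real indicator.
IOC_PATTERNS = [
    re.compile(r"rdp\.chrisro\.fun"),
    re.compile(r"198\.51\.100\.\d{1,3}"),
]

def scan_line(line: str) -> list[str]:
    """Return the pattern of every indicator that matches a log line."""
    return [p.pattern for p in IOC_PATTERNS if p.search(line)]
```

Run continuously over log streams, something in this spirit is why "no IoCs found" can be stated quickly: the matching already happened before anyone asked.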

Touché... CF SIRT is a well-oiled machine [1]

[1] https://blog.cloudflare.com/cloudflare-investigation-of-the-...

So, they covered it up in January and now want us to believe that this is not yet another attempt at covering it up.

Up until that statement I was willing to believe that they only just found out about it.

Had the same thought. A security breach in January, and only now are the details of the breach revealed (by someone not related to Okta). In other words, Okta would never have revealed the security lapse if not for this leak.

One can only imagine the many security breaches that are never revealed (from any company).

They were kind of stuck with admitting it was active in January since most of the screenshots have a late January date showing in the bottom right hand corner.

Another possibility could be that they didn't know about it in January and instead only figured it out sometime last week, at which point Todd McKinnon sold 7,600 shares of stock.

This might be correct, although it doesn't look great that they're only talking about it after the leaked screenshots.

However, if LAPSUS$ have more, this could easily lead to them wanting to prove him wrong and releasing more proof of compromise. If that happens, his credibility will be toast.

Oh, this absolutely looks like "an attempt". I'm sure investors are calmed by this tweet. Nothing to see here, move along.

Pre-market trading is already -9% (was -14.5% before that tweet).

"third party customer support engineer working for one of our subprocessors."

Holy red flags Batman. How can you be a serious security company while outsourcing such critical components?

You do know this means they have broken the GDPR by not reporting a detected breach.

They only have to report if there is significant risk to the individuals involved. If there are few individuals and the event was indeed contained, they have no reporting requirement.

Doesn't that depend on how well they separate access within the company? If they know that one of their support reps in Costa Rica was "turned", they might not have to declare anything in the EU if they can prove that the rep had zero access to EU customers.

Edit: to be clear, I don't know, it's a genuine question. Does "any breach" count?

It has to be a breach where EU citizens' PII was potentially accessed.

Notification is required only if their impact assessment concludes on a significant risk to users (which they must be able to produce to the data protection authority at any time).

Otherwise, business as usual.

Okta is a company in the United States of America; by law, companies under GDPR are already in violation when transferring data to Okta, and would have needed to report themselves long before the breach.

Interesting point. Speculation that rootkitting all of your employees made the problem worse:


Honestly: fuck corporate security and leadership who kept pushing for spying on their workers with increasingly intrusive tools. I can't remember how many times I argued with IT that MITMing SSL was not ok (there are some tools that do this; CrowdStrike, maybe?). They made their bed, now they gotta sleep in it.

MITMing SSL is also Zscaler's MO, and guess what my job is preparing to mandate.

Organizational endpoint security seems to be much more about incident response than incident prevention. That is, it's more focused on providing a nice audit trail that can be used to find and fire/arrest the perp once a breach occurs than it is at preventing breaches from happening. It probably fills a due-diligence tickbox somewhere, allowing the company to say "if a breach occurs (the probability of which is always nonzero) then we have measures in place to find out how, where, and by whom and mitigate the damage" to shareholders and important customers.

It is not an audit tickbox. It is barriers on the right side of the bowtie, and in the RECOVER dimension of NIST. Here you go: https://zinatullin.files.wordpress.com/2020/12/risk-bow-tie....

I work in corporate security. It's a great gig! MITMing SSL is something I refuse to implement, and argue against in every forum I attend.

It's always interesting to hear people think they have rights on corporate networks.

It is a complex issue with plenty of nuance I'm too tired to thumb-type. I know there are lines that shouldn't be crossed. But it is so naive to think companies shouldn't have an eye on their data.

Do you think banks shouldn't have cameras in their lobbies? Or that tellers can walk in and out of the vault with black bags and no one asks what's inside?

I don't think the OP's point was about rights as an individual on a corporate network, more about trying to backdoor your staff's equipment and the consequences of that.

To use your camera analogy, you better have a good lock on the security room holding the tapes.

Yes, putting every device in Windows AD and installing remote management tools that run at the system level is a recipe for total compromise.

Software diversity is good. Remotely controlled mono-cultures are bad. IT management and security compliance people need to understand this.

I liken the mindset of (only we will use the remote control software to do good things) to Encryption back doors (only law enforcement will use them to catch criminals). Computer scientists call these 'Exceptional Access Systems' and it has been shown (many times) that it is impossible to ensure they will not be abused and used against you.

Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications:


> Software diversity is good

Software diversity also means attack surface multiplication, so it's a delicate tradeoff.

> Yes, putting every device in Windows AD and installing remote management tools that run at the system level is a recipe for total compromise.

What alternatives are there? I honestly don't know

It's not really about alternatives; what would the spyware stop that a simple daemon that version-checks your device wouldn't?

If you're working from home and your device gets owned, and the attacker just sits there slowly slurping data as you go about your business... chances are you're not going to detect it. But if you did, you'd probably notice it running when it's not supposed to, and sending network traffic when the machine isn't actively used.

If someone uses a laptop breach to look at data they wouldn't normally look at (or tons of records), an audit log would give that away.

So really, to do security well you have to profile the role. Why are you looking up customer record abc123? Trying to see what your girlfriend's doing, or is it because they've called the customer support line?

You basically have to define what misuse looks like and protect against that.
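A sketch of that idea, under assumed names and a made-up baseline (real systems would model per-role behaviour, not a single threshold):

```python
def flag_unusual_access(events, baseline=20):
    """events: iterable of (agent, record_id) pairs from an audit log.

    Flag agents who touched more distinct records than a per-shift
    baseline -- a crude detector for the "slow slurp" described above.
    """
    seen = {}
    for agent, record in events:
        seen.setdefault(agent, set()).add(record)
    return {agent for agent, records in seen.items() if len(records) > baseline}
```

Even something this crude forces the useful question: what number of distinct customer records does this role legitimately touch per shift?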

Back on the slow slurp: hardened, immutable devices that are regularly updated, with limited auth sessions, would likely contain the breach.

Same deal on servers: you harden to stop it, but you monitor for when it happens.

Do people really monitor, though? Some, sure, but your average org buys a tool then fails to configure it with any sort of context.

In security engineering there's this subculture of individuals that just implicitly trust every vendor that offers a solution because that means they don't have to do their own research on the issue. And if anything goes wrong, just blame the vendor!

I have talked to security engineers that have had the company procure millions of dollars worth of security tools, and yet they don't even know the basics of security.

For God's sake, I had to explain what end-to-end encryption was to a team of "senior" security engineers the other day. They genuinely had no concept of it. As a security SWE I am so done with these sorts of people. Fuck them! Why are they even in security!? Real private data is at risk because of their incompetence!

It is rather appalling. I recently listened to a podcast about a police station in a small city that was compromised and all their data ransomed. They had a contract company that handled all their cyber security needs. The "security" company had an RDP port open to the world to an admin account on the server that ran everything for the police station and city. In order to protect the employees' webmail, they required that you log in from an internal IP, and so they gave this admin remote desktop account's credentials to employees that needed to check their email remotely. They could log in to the server with the admin account and then use the browser on the server to log in to their webmail. The same set of root admin credentials was given to every employee. It's just insanity. And I'm sure that government contractor was hella expensive.

In case others are curious, that (probably) was Episode 96[0] of the "Darknet Diaries" podcast. Very worth a listen IMO.

[0] https://darknetdiaries.com/episode/96/

From friends and former colleagues I know at least quite a few companies using tools like Tanium [0]. The Tanium client on the employees' machines (and on all corporate servers, containers, VMs) is still based on python 2.7.

It would be interesting to know how long Tanium takes to backport security fixes. I mean, py2.7 was sunset on 2020-01-01 [1].

[0] https://www.tanium.com/ [1] https://www.python.org/doc/sunset-python-2/

For many of them, the extent of their knowledge is: push a button, run a scan, and tell an app or infra team "here are your vulnerabilities, now fix them". Ask them to dive a little deeper with you on a vulnerability (e.g. past the entry in the spreadsheet they gave you) and they often don't have the technical ability to do so.

I have built solutions that are a perfect fit to solve security issues at multiple companies, only to have them turned down because “management says we have to buy a vendor for this”. Incredibly frustrating, especially when they go with a vendor that is in all ways worse than the solution I was providing.

I worked at a company where the CIO believed everything vendors told him, and nothing any of his employees told him. He also got lots of kickbacks from said vendors and was eventually perp-walked out of the company.

Same company had a head of security who was full-time employed by two different companies at the same time (he had unlimited vacation time for both) without either knowing the other existed.

Best thing was that the company did investment/insurance services and had a banking division.

> I have talked to security engineers that have had the company procure millions of dollars worth of security tools, and yet they don't even know the basics of security.

When I was in college the whole student network went down. Apparently there was a hack from the student network to the campus police network, so campus IT pulled the plug on any of us having network access. Because those two weren't separated because why would they be?

So I hoofed it to the IT department and I asked to talk to someone who knows a thing or two about networking. The director of IT came out. He sat me down and started telling me about the $300,000 budget he just got approval on. I was thinking, you can't take some of that and stand up a router with packet filtering to isolate the student network from the police network in two different subnets? A FreeBSD box might do in a pinch. (It was the early 2000s and network traffic loads were... different.)

I learned a lesson that day about seniority and the Peter Principle. Climb high enough and your mindset changes, from technical solutions to issues of money and resource allocation. From there, "let's just pay $VENDOR to do it, they'll solve the problem for us" is but a small step.

There's a mix of the Peter Principle, a lack of experienced engineers, and engineers being too expensive to acquire at play in the IT world. But as far as I recall, that was always the case.

I'm not sure that "security engineer" would be an accurate term to describe such a person.

Bill Demirkapi posted an update later that this earlier assumption was perhaps incorrect [0]. It seems an internet-routable RDP server at rdp.chrisro.fun ("chrisro" is apparently named after the specific Okta employee/contractor whose name is visible in screenshots) is a more likely vector for compromise.

0: https://twitter.com/BillDemirkapi/status/1506123471352438784
