Maybe I’m just an unimpressed security professional, but I’ve still not seen evidence I’d call a breach. At least not a significant one, if you want to argue semantics.
Workers at organizations get compromised all the time. This doesn’t mean their systems/products are compromised.
I do security (albeit not CISO- or compliance-style, but commercial anticheat), and in my opinion, if a support agent's account was used by a third party to view anything about my account without permission (even an undisclosed email address or name), then their system was compromised and it is a data breach.
IMO, support agents also shouldn't be able to view or access a customer's account without explicit confirmation from an existing logged-in admin, via a time-limited grant that automatically expires back to the opted-out default.
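A time-limited, default-deny grant like that could be sketched as follows (purely illustrative; the class and field names are invented, not anything Okta actually uses):

```python
import time

# Hypothetical sketch: support visibility requires a grant explicitly
# created by a customer admin, and the grant expires automatically,
# so the default state is always "support sees nothing".
GRANT_TTL_SECONDS = 3600  # e.g. one hour of support visibility

class SupportAccessGrant:
    def __init__(self, account_id, granted_by_admin, now=None):
        self.account_id = account_id
        self.granted_by_admin = granted_by_admin
        self.granted_at = now if now is not None else time.time()

    def is_active(self, now=None):
        now = now if now is not None else time.time()
        return (now - self.granted_at) < GRANT_TTL_SECONDS

def support_can_view(account_id, grants, now=None):
    # Default-deny: support sees nothing unless an unexpired grant exists.
    return any(g.account_id == account_id and g.is_active(now) for g in grants)
```

The key property is that there is no code path granting standing access: a compromised support account can only see accounts whose admins opted in within the last hour.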
Yeah, the screenshots they admit are real clearly show Slack, JIRA and AWS being open. What did the attackers see there? Were the customers whose data was viewed notified? How can Okta tell whether that data is sensitive without talking to their customers?
A competent security response to this would have been "Yes, they compromised one of our support technicians. We've initiated an audit and are sending each affected customer's administrator an e-mail listing every action that support representative performed on their account."
If through compromising those workers outside parties gain access to sensitive systems, and that situation is not promptly detected and corrected, then the system _is_ compromised.
Okta is not just a bunch of software, it's also staff and processes, and the result is a trusted service they provide to customers. If that service is compromised, it doesn't really seem to matter how?
> If that service is compromised, it doesn't really seem to matter how?
I hear what you're saying, but the how really does matter, and will change how customers perceive the issue and make decisions about how to react.
e.g. "databases were open to the Internet and all data has been siphoned" lands quite differently than "a staff member abused their privileges but the scope of abuse was limited to xyz".
If I'm a customer, it tells me a lot about what Okta needs to do next, and how much I should freak out right now. It's still extremely problematic that a staff member (1st or 3rd party) could abuse such privileges, and I immediately have questions about how those privileges were abused and to what actual effect, but it's a fundamentally different problem than other types of breaches.
How it happened doesn't change the fact that they have been breached.
If I was a bank and claimed that I haven't been robbed, an insider just transferred billions of pounds out of the bank and then fled, I think everyone would rightly say "What are you talking about, you have been robbed!"
It doesn't matter if it was done by a guy in a black and white stripey t-shirt, or if it was done by a rogue internal employee, a bank robbery is a bank robbery.
In fact, the ability of an internal staff member to transfer lots of money out of the bank probably signifies a more significant and systemic issue, particularly if I've lost my money and the bank refuses to acknowledge they have been breached/robbed ("it was just a rogue internal staff member, not a robbery! our security hasn't been breached!").
A bit of a stretched analogy, but I'm sure everyone gets the point. Security isn't just about technical security; it's the whole process involved in making sure these things don't happen. A bank's "technical" security might be great, but the bank would still be considered horribly insecure if a staff member could transfer money out of any account. Equally, an auth service might be "technically" secure, but the ability of a single rogue staff member to do a lot of damage suggests more systemic issues.
> It doesn't matter if it was done by a guy in a black and white stripey t-shirt, or if it was done by a rogue internal employee, a bank robbery is a bank robbery.
I have to respectfully disagree.
Yes, the end result may be the same, but even in a bank robbery, the how matters, and will drive different behaviors from everyone involved: the bank, law enforcement, and customers of that bank.
If as a customer, I learn that a guy in a stripey t-shirt holds a teller at gunpoint, my conclusion goes something like "that's a terrifying situation for the teller, and I hope they're ok". I'm probably not going to stop using that bank.
If on the other hand, I learn that there are systemic issues with bank security, and internal employees have been embezzling funds somehow, I'm probably going to think hard about whether this is a bank I want to do business with.
> Security isn't just about technical security - it's the whole process involved in making sure these things don't happen.
Yes, and when factors are involved that are out of the bank's control (e.g. a crazy person walks in with a gun), it might be fair to ask why the guy got inside to begin with, but the conclusions you draw about such an incident are far different than the conclusions you'd draw if internal employees were involved.
In case this wasn't clear from my earlier comment, I didn't mean to imply that an internal process issue makes any of this ok. But it does make it different than other types of breaches.
Bottom line: the how still matters, not because one type of problem is ok and the other isn't, but because the actions a customer should take / consider will be different depending on how the breach happened.
Yeah, I think we're on the same page. The primary focus of my original reply was that the "how" really does matter. I agree that this is still a breach regardless.
If you follow the Lapsus$ Telegram channel, you will see they are claiming they got AWS API keys from the corporate Slack. That might be more dangerous than access to the support console.
I can see a Slack breach being far more damaging than policy would suggest it should be, because plenty of people use it to share things they technically should not.
Without proper separation of duties to limit the blast radius, it's just as damaging as a software vulnerability. It sounds like that's the real issue here: compromise of a support engineer led to far more access than should have been permissible.
The screenshots on the linked tweet make it look like Okta dogfoods their own product for access to various services, and someone had access to one of their admin accounts. Which is bad, but that could mean "we phished this one person who works at Okta" and not "we compromised Okta and have unfettered access to their customers/valuable assets".
The news of the coming days may well prove me wrong, but I am not assuming the worst from this yet. Many companies, whether or not they use an IDaaS, do things like login anomaly detection, and users coming in from weird locations at weird times of day would be sure to set off alarm bells at some of the big targets. Heck, AWS does it for customers with GuardDuty.
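The kind of login anomaly detection described above can be sketched in a few lines (illustrative only; real products like GuardDuty use far richer signals and baselining than this):

```python
from datetime import datetime, timezone

# Toy anomaly check: flag a login as anomalous if it comes from a country
# the user has never logged in from before, or at an hour well outside
# the user's usual activity window. Field names are invented for the sketch.
def is_anomalous(login, known_countries, usual_hours_utc):
    """login: dict with 'country' (ISO code) and 'timestamp' (aware datetime)."""
    hour = login["timestamp"].astimezone(timezone.utc).hour
    new_location = login["country"] not in known_countries
    odd_hour = hour not in usual_hours_utc
    return new_location or odd_hour
```

In practice the baseline (known countries, usual hours) would be learned from each user's history rather than hard-coded.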
The breached account shown in the screenshots belongs to a user at a third-party outsourcing firm providing support services for Okta. So he is technically not an Okta employee.
It seems strange that such a user would have wide access. It could be that his account was just used to gain further access, or it could be that his account had wide access by mistake. Or the user doesn't actually have that wide access.
There is talk of superuser access. But does that refer to the user's actual privileges, or to the fact that he has access to the tool called "superuser" shown in the screenshots?
I knew about AWS Backup, but I had never before seen an option to enable it for S3. It looks like it is in a limited (1 TB) preview in the Oregon region, and doesn't support backing up buckets encrypted with customer-provided keys.
That said, AWS Backup is the answer to ransomware woes and it can't GA soon enough (3P solutions like Rubrik, Druva notwithstanding)
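For reference, an AWS Backup plan is just a small structured payload handed to the CreateBackupPlan API. A sketch of the request shape (plan, rule, and vault names here are placeholders; in practice you'd pass this dict to boto3's `backup.create_backup_plan`):

```python
# Builds the BackupPlan payload for AWS Backup's CreateBackupPlan API.
# Names are illustrative placeholders, not a recommended configuration.
def build_s3_backup_plan(plan_name, vault_name, schedule_cron):
    return {
        "BackupPlanName": plan_name,
        "Rules": [
            {
                "RuleName": "DailyS3Backup",
                "TargetBackupVaultName": vault_name,
                "ScheduleExpression": schedule_cron,  # AWS cron syntax
                "Lifecycle": {"DeleteAfterDays": 35},  # retain for 35 days
            }
        ],
    }

plan = build_s3_backup_plan("s3-ransomware-guard", "my-vault",
                            "cron(0 5 * * ? *)")
```

The ransomware angle comes from pairing a plan like this with a vault the workload credentials can't delete from (e.g. vault lock), so compromised application keys can't destroy the backups.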
Implicit grant is deprecated; the forthcoming OAuth 2.1 [1] standard solidifies this.
It standardizes on the language "public client" and "confidential client", where a public client is an OAuth client like a mobile app or SPA that cannot keep a client secret, but has an access token delegated to it (plus an optional refresh token). Public clients must use the authorization code grant with PKCE.
Confidential clients are what we would previously have thought of as authorization-code-grant clients, where a server-side process holds a client secret and an access token to take actions on behalf of a user.
Depending on the OAuth use case, maintainers of the system may need to keep track of which clients are public or confidential, and limit their entitlements accordingly.
Public clients have the obvious issue that they run on an end-user device, so their tokens may be stolen; proposed standards like DPoP [2] and token binding [3] aim to address this.
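The PKCE piece of the authorization code flow is small enough to sketch. The derivation below follows RFC 7636's S256 method (the function names are mine): the public client keeps a random verifier, sends only its hash with the authorization request, and proves possession of the verifier at the token endpoint, so an intercepted authorization code alone is useless.

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # Random 32-byte verifier, base64url-encoded without padding (43 chars),
    # which satisfies RFC 7636's 43-128 character requirement.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def server_verify(verifier, challenge):
    # What the authorization server does at the token endpoint:
    # recompute S256(verifier) and compare to the stored challenge.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge
```

The client sends `challenge` (with `code_challenge_method=S256`) in the authorization request and `verifier` in the token request; the server only issues tokens if they match.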
I work in a huge enterprise. We have incredibly customized software and stacks that have not changed much in 30 years, because they did not need to.
Now the people who wrote those stacks and who understand them are retiring/quitting. Kids coming out of school don't want to learn these systems, nor do people off the streets. You can only pay people to come out of retirement so many times to keep the plant running. This is above and beyond mainframes, and is intertwined deep in the code that powers every single application that runs the plant today.
We can't run off the shelf software on-prem, a huge level of customization is needed to bring it in.
We cannot pivot quickly to new things or support new languages.
We really struggle to add new features/releases and add new software to drive revenue. The IT overhead that just goes into keeping the plant running every day is astounding.
This is what I think of when I hear technical debt.
Going down this road did give us advantages for a long time, but now we're in an enormous crisis. It's not an insurmountable challenge, but I would be surprised if there aren't a lot of large companies who are brought down by their technical debt as faster moving competitors move around them. I certainly feel that unless we get our act together, we will be disrupted.
Code that has worked for 30 years is more technical debt than the ability to support 'new languages' or 'pivot' the code? Maybe I am an old fart, but the opposite seems right to me.
Large code bases are difficult to add functionality to.
A code base that is easy to pivot and switch languages sounds more like a nightmare. Isn't it more likely that your new, zeitgeist-capable software would torpedo development in far less than 30 years? As I understand "new programmers", the reinventing-the-wheel cycle is now measured in a few years, not decades.
I think I represented it poorly. It's more than just code in one system. It's systems built upon systems built upon systems. It encompasses our network, our software deployment stack, our proprietary extensions to standards, and much more. Unknown dependencies on unknown dependencies on unknown dependencies (and it's not like we're slacking on trying to map that and keep the asset inventory up to date).
It's basically paralyzing. It's so hard to get a release done, add capacity, or add new features for our lines of business (we have dozens!).
My job role and further details are risky to discuss, because this forum is read by my colleagues and likely the nerdier execs. I'll leave it at this: I am someone very senior and actively involved in trying to tackle our problem, so I see the efforts and the challenge first-hand.
I can tell you that execs are very aware of the problem. Higher-ups have spoken about it at townhalls, though they use softer language than I do. Since 2017 a lot of modernization attempts have been made (go cloud, use standards, use off-the-shelf software as much as possible), with very little to show for it so far.
Obviously it isn't all just tech that causes this, culture has a big part to do with it.
It feels like the scenario in the phoenix project almost, it'd be funny if it wasn't so serious.
Resonates with some big corps I've worked with in the past, especially in industries where technology (rather than old business models) is becoming the primary sales channel. Technology companies with technology-minded management often disrupt the companies managed by the old way of thinking (e.g. finance, insurance, etc.).
Some of the things you mention are red flags, though, at least to me, having worked in such companies before; they normally make me question the company's management. The biggest one, seen in a previous place I worked, is "buy off the shelf as much as possible" versus build. How many successful tech-first companies that have actually disrupted an industry use that model for their core platform? The companies I've seen get away with it only do so until they face competitive pressure, or because technology isn't their primary advantage.

In fact, as you get to a certain size it can make sense to build your own, reduce your vendor count and ongoing costs, and take advantage of your economies of scale. How many big tech firms rewrite databases, components, or dev-ops tools, or are at least open to it when the advantage is there? The successful tech-first companies often do, even if they were once things like book stores where it "wasn't their core business". They often even open-source their components to support their business and give their tech people more cred, allowing them to attract even more tech talent.

The most successful, nicest-to-work-at tech places usually err towards building when it concerns their platform, with some pragmatism thrown in: use modern tooling from elsewhere if required, normally open source, but bought if it offers nothing of differentiation (e.g. cloud products, databases, etc.).
Cloud is just a potential enabler IMO; you still need the culture to execute. A big corporation has a lot of interacting requirements, and needs the long-term flexibility to change them without being on the hook in a vendor's backlog, competing with other firms. It also potentially leaks your roadmap to competitors. Common business software (e.g. document writing, email, chat, etc.) is the exception. If you're a big corp, you're usually in a monopoly/oligopoly position: there aren't too many people doing what you do at the scale you do it, and most vendor solutions are really just "outsourced builds", where the long-term flexibility deteriorates as the vendor pivots/changes.
Technical debt is the idea that, just like you'd take a loan for your business to grow faster, you can take shortcuts while coding your first versions (e.g. not having adequate test coverage, not making it modular or extensible, etc.) to get the product out or meet a deadline. Just like your business loan, you get into technical debt knowing that you will eventually pay it back (i.e. write the additional tests, refactor, etc.).
Not every software problem is technical debt.
While your problem seems much larger, the same problem happens in smaller scales in every software company when engineers quit and leave a complicated codebase behind for new hires to take over. You should be looking specifically for people who have experience working with legacy code in your next hiring round.
I don't know that it has any actionable information for you (or anyone, really...), but what you describe sounds a lot like the situation in The Phoenix Project.
Prisma Cloud (the cloud monitoring part) is not a great product. It lags pretty far behind cloud providers' capabilities.
I also got the email that Orca probably sent to everyone in their CRM about this, and while I didn't need any more reasons to think less of Prisma, I now associate Orca as a competitor and probably an earlier call than Palo Alto for cloud.
We are considering Prisma Cloud to monitor an on-premise Kubernetes deployment. Is there anything I should be concerned about, or better options to consider?
The Kubernetes protection is derived from their Twistlock acquisition which is really good (just wish they’d get some SAST stuff in there). Not tied to RQL (but can be queried with it for some information)
I actually have never seen its Kubernetes security platform.
If it's using RQL for that, I would take it as a red flag that it won't support much customization or logic that would let you tailor it to your organization.