Slack communicated the incident on the 31st of December. That day is typically used by politicians to bury bad news, since attention to the news cycle is quite low then. It might be coincidental; still, the little cynical voice in me is left wondering.
They detected the incident on the 29th of December. I kinda think that regardless of what day they disclosed, from that day to today, people would find various reasons to complain: "too soon to know what happened", "the day before New Year's", "the day after New Year's", and "too long after the event". In snark I will suggest that's a pessimist voice you're hearing; the cynic in me wonders if there's any time to disclose that won't receive a complaint.
Slack probably has a lot of large - and very demanding - enterprise clients who absolutely require their non-trivial service suppliers to have infosec policies with a maximum 72-hour notification window for security breaches. And I can guarantee that no company wants to have client-specific notification window policies in place. That way lies the road to madness, missed commitments, and nasty litigation.
It's a good practice, in all honesty. The time window is for the initial notification; after that, there will be subsequent notifications as the actual investigation completes.
You may argue it's not common, but I'd expect a standard to be determined by an authoritative and fairly representative body, not by an HN forum or a private company in isolation.
Doesn’t really make me happier or unhappier, was merely making a point that RFCs come in different flavors, only one of which is a “standard” [1]. Not sure why it touched a nerve with so many people.
There are no standards here - the world of SEO is a weird place of guesswork and testing what works. I think the Google recommendation [1] is still to use rel=canonical; however, who knows why this noindex is added to alternative duplicate pages - it might be that the SEO person at Slack knows what they are doing.
I think the accusation of foul play is somewhat overblown given there is a version that does not contain noindex.
I understand that, but what I'm saying is that SEO itself is often based on what actually works. This noindexing of non-canonical resources is likely to be just that, not an attempt to hide things from Google.
If you have a localised blog for multiple regions you will probably have your main blog set up to be indexed and your regional blogs set up to be no-indexed so that Google isn't indexing the same stuff twice.
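For anyone who wants to verify this themselves, here's a rough sketch of a check (the URLs are placeholders, not Slack's real blog paths; it just fetches each version and reports the robots meta tag and the declared canonical URL):

    # Rough sketch: check which localized versions of a blog post carry a
    # noindex robots meta tag and which canonical URL they declare.
    # The URLs below are placeholders, not Slack's real blog paths.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class RobotsAndCanonical(HTMLParser):
        def __init__(self):
            super().__init__()
            self.robots = None
            self.canonical = None

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "meta" and (a.get("name") or "").lower() == "robots":
                self.robots = a.get("content")
            if tag == "link" and (a.get("rel") or "").lower() == "canonical":
                self.canonical = a.get("href")

    for url in [
        "https://example.com/blog/security-update",              # "main" version
        "https://example.com/intl/de-de/blog/security-update",   # localized copy
    ]:
        parser = RobotsAndCanonical()
        parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))
        print(url, "robots:", parser.robots, "canonical:", parser.canonical)

If only the regional copies carry noindex and they all point their canonical at the main post, that's the standard duplicate-content setup rather than an attempt to hide the page.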
They likely published it as soon as they could, and translations will be a couple of days behind. Other blog posts will be pre-planned and pre-translated. The noindex on non-US versions is probably also related to them not being fully translated. Check back in a few days to see if I'm right.
In a "few days", well we have the August 2022 precedent!
> In August 2022, Slack reset user passwords after accidentally exposing the password hashes in a separate incident. Unsurprisingly, that particular notice is also marked with a 'noindex' (both the U.S. and international versions).
In that case, all versions have the noindex. And it's four months later.
> BleepingComputer further observed that the "meta" tag containing the "noindex" attribute was itself placed towards the bottom within the page's HTML code, in an elongated line that overflows without breaking.
Placing meta tags at the bottom, out of sight ... an _interesting_ choice given the nature of the news.
Agree. While I'm pretty sure this is to bury the articles in search, that part is probably just down to technical reasons (or, more plausibly, technical laziness).
I got one at 9:31pm Eastern Time last night (Jan 4, 2023).
Here's what it said:
We wanted to make you aware that we are currently investigating a security incident, and that our investigation is ongoing. We will provide you updates about this incident, and our response, as they become available. At this point, we are confident that there are no unauthorized actors active in our systems; however, out of an abundance of caution, we want to ensure that all customers take certain preventative measures to protect your data as well.
Action request:
Out of an abundance of caution, we strongly recommend that all customers take the following actions:
- Immediately rotate any and all secrets stored in CircleCI. These may be stored in project environment variables or in contexts.
- We also recommend customers review internal logs for their systems for any unauthorized access starting from December 21, 2022 through today, January 4, 2023, or upon completion of your secrets rotation.
Additionally, if your project uses Project API tokens, we have invalidated those and you will need to replace them. You can find more information on how to do that in our documentation here.
We apologize for any disruption to your work. We take the security of our systems and our customers’ systems extremely seriously. While we are actively investigating this incident, we are committed to sharing more details with customers in the coming days.
Thank you for your urgent attention to rotating your secrets.
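For what it's worth, here's a rough sketch of how the first action item (rotating project environment variables) could be scripted, assuming the CircleCI v2 API's project envvar endpoints; the project slug and the new values are placeholders, and contexts, SSH keys, and the upstream credentials themselves still need to be rotated separately:

    # Rough sketch of one step of the rotation: enumerate and replace CircleCI
    # *project* environment variables via the v2 API. Contexts, SSH keys and
    # the upstream credentials (AWS, npm, etc.) still have to be rotated first.
    # PROJECT_SLUG and the new values are placeholders.
    import os
    import requests

    CIRCLE_TOKEN = os.environ["CIRCLE_TOKEN"]      # a freshly issued personal token
    PROJECT_SLUG = "gh/your-org/your-repo"         # placeholder project slug
    BASE = f"https://circleci.com/api/v2/project/{PROJECT_SLUG}/envvar"
    HEADERS = {"Circle-Token": CIRCLE_TOKEN}

    # New secret values, generated after rotating them at the source.
    new_values = {"AWS_SECRET_ACCESS_KEY": "...", "NPM_TOKEN": "..."}

    # List existing variables (values come back masked; the names are what matters).
    existing = requests.get(BASE, headers=HEADERS).json().get("items", [])
    for var in existing:
        print("found:", var["name"])

    # Delete and re-create each variable we have a fresh value for.
    for name, value in new_values.items():
        requests.delete(f"{BASE}/{name}", headers=HEADERS)
        resp = requests.post(BASE, headers=HEADERS, json={"name": name, "value": value})
        resp.raise_for_status()
        print("rotated:", name)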
Yes, this is well known; they had a major breach earlier in 2022 that apparently leaked all of their Github tokens. This happened almost simultaneously with the Heroku leak and is thought to be connected to it. From OP's link, it sounds like they had another one recently; maybe another group re-using the first vulnerability?
If you don't communicate, you lose the narrative. As the article highlights, it seems Slack has gone out of its way to exclude this incident report from appearing in Google searches. As a result this article will likely appear first.
It is also worth noting that other major breaches (LastPass) were preceded by source code breaches. The corresponding incident reports also included reassurances that there was no impact on customer safety. The Slack incident report doesn't specify what type of Github repos were accessed, so it is hard to judge whether any sensitive code has been leaked.
From the recent LastPass security incident report:
Based on our investigation to date, we have learned that an unknown threat actor accessed a cloud-based storage environment leveraging information obtained from the incident we previously disclosed in August of 2022 (source code breach)
Sure, but it's a way less misleading comment than the original BleepingComputer article, which mentions the "noindex" tag without ever mentioning that it was applied only to duplicate content.
> The Slack incident report doesn't specify what type of Github repos were accessed, so it is hard to judge if any sensitive code has been leaked.
This is so important. Saying "less than 1% of code" is not very useful, as a leaked passwords or config repo may only be a few bytes. It's also not customer data, but it's extremely important.
I think the fact that they are not being forthcoming means either they are clueless, or something really bad happened and they are weaseling and hoping nothing bad happens (spoiler: it will).
The article did mention that this breach did not include the “primary codebase”. Where is the rest of the code hosted? I’m guessing some private Git server (e.g. “on-prem” GitHub/GitLab Enterprise)?
I would be surprised if Slack disclosed publicly exactly which source code provider they used, but something hosted like Github EE sounds like a very reasonable assumption given their response. Maybe someone was given a goal to migrate their main source code repo to a shared Salesforce instance for security or cross-team integration reasons, and everything left on Github.com was just the stuff too unimportant to migrate. If so, it seems like it paid off for them in this case.
This is how LastPass started its efficient downward spiral, no? This isn't shocking, but it also is.
Is it wise to simply keep private repositories away from GitHub at this point? It seems the best way to avoid being drawn in with the rest as a target.
> Is it wise to simply keep private repositories away from GitHub at this point?
This sounds like the equivalent of "Use macOS because most viruses target Windows". Which some people do, and yes, it probably lowers your risk, but there are ways to use Windows (and Github) securely.
If you're really paranoid about your source code, you should probably self-host your own instance of Github/Gitlab/etc (and lock it down through VPNs, IP white lists, etc) rather than using their cloud service.
This approach also gives you first hand access to logs that would expose internal bad actors, if that's included in your threat model.
Going to respectfully elaborate on this specific point, with full awareness that it's not core to the discussion, and that my opinion is my own and highly subjective, and that I'm not fully disagreeing but wish to point out a nuance. There are too many ways to put hidden software on a Windows system, which makes it harder to use securely than my OS of choice, macOS.
It's more difficult to audit a Windows system than a macOS system. For example, background processes can be installed on Windows in dozens of ways. Go download AutoRuns (https://learn.microsoft.com/en-us/sysinternals/downloads/aut...) and look at how many tabs it has. The docs even say "You'll probably be surprised at how many executables are launched automatically!". UAC mitigates this somewhat, but it lacks granularity; it's all-or-nothing, so you'll click Allow without knowing what's actually going to happen. And then you have COM registration and win32 APIs that let applications get at other windows easily. Layer upon layer of legacy compatibility makes the whole system feel like a mess - at least to me.
Using macOS securely is also quite involved of course - for example auditing Homebrew's behavior when it downloads half the open-source world just for one package. But when it comes to stuff running in the background, by comparison, macOS Ventura recently added a comprehensive list of launchd agents and other daemons that run at startup; each of which can be toggled (though the UI doesn't allow you to drill into them). It's all on one pane in System Settings, built into the OS, no need for Sysinternals' separate tools.
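If you'd rather script that audit than click through System Settings, a minimal sketch is to just list the plists in the standard launchd directories (this only covers launchd items, not kexts, login items, or configuration profiles):

    # Rough sketch: enumerate the plists that launchd will pick up at boot/login.
    # These are the standard third-party launchd search paths; anything
    # unexpected here is worth a closer look.
    from pathlib import Path

    LAUNCHD_DIRS = [
        Path("/Library/LaunchDaemons"),            # system-wide daemons (run as root)
        Path("/Library/LaunchAgents"),             # system-wide per-user agents
        Path.home() / "Library/LaunchAgents",      # agents installed just for this user
    ]

    for directory in LAUNCHD_DIRS:
        if not directory.is_dir():
            continue
        print(f"== {directory} ==")
        for plist in sorted(directory.glob("*.plist")):
            print("  ", plist.name)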
Using macOS securely is a lot easier, in my experience. There are ways to use Windows securely, but it's a lot harder, and (I think) that makes it not worth the trouble. Using macOS securely is a lot closer to just using macOS.
That said, for any OS, it goes without saying to only install trusted software. It's possible to cleanse both OSes fully of software you no longer want, but on Windows it always feels like surgery, whereas 90% of cleaning something from macOS is dragging the application bundle to the Trash.
If you give a third party essentially limitless access to execute arbitrary code on your hardware, then you cannot really assert anything with certainty about that machine. That makes meeting most/all threat models difficult.
I get what you're saying but if my threat model doesn't include Microsoft or state-level actors that can co-opt Microsoft or buy/develop hoarded Windows zero-days, then a patched Windows box is probably good enough.
> The incident involves threat actors gaining access to Slack's externally hosted GitHub repositories via a "limited" number of Slack employee tokens that were stolen.
> While some of Slack's private code repositories were breached, Slack’s primary codebase and customer data remains unaffected, according to the company.
Wouldn't private git hosting add another barrier for the attackers? In addition to the tokens, they would also need to somehow gain access to the network through VPN or through an opening in the firewall?
* gitlab hosted internally
* VPN needed to access gitlab
* VPN requires a gsuite login, with 2FA
* You can only log in to a gsuite account on an employer-provided machine (so no access to anything on a non-employer machine, even email)
You'd have to steal a company laptop, or social engineer yourself into a building with a desktop machine, to even start to get near the code repository.
The "must login from a employer-provided machine" thing can be disabled on a per-user basis by remote policy (via an administrator), if needed.
There are "application credentials" or "API tokens" that get generated for various purposes and live long.
There are many programs that do not support "credential helpers", and those need you to input an API token that is usually generated just once and almost always has enough permission to clone a repository (at a minimum).
The `hub` and `lab` CLI tools work exactly in this manner.
For example, using private bare Git repositories hosted internally at the company with mandatory OpenSSH U2F authentication (requiring a physical key like a Yubikey) would go a long way and takes two minutes to set up. Or a VPN. Pick your poison.
But then you lose convenience.
Slack picked convenience, and it led to the headline "Slack's private GitHub code repositories stolen over holidays" and the top-voted HN comment so far: "Slack is selectively and deliberately limiting public access (discoverability) to the security breach announcements."
The question is: was convenience worth it?
There are other tools. There are other ways to host. There are other ways to do CI/CD.
If CEOs pressure me to neglect security, I always tell them: "Do you want to be on the front page of the German news magazine Der Spiegel with a bad headline?"
Is it possible for a hardware token to prevent this in theory? You could use a hardware token to compute challenge-response answers without having external access to the private key material, as opposed to using simple secret API keys like we do today. This is basically how an HSM works too.
But does it solve the problem? It prevents the key material from leaking, but the problem is that the API key was put somewhere sensitive and then stolen. Presumably you'd need to be able to use the key material from any location where you'd normally need API keys: CI machines, developer workstations, production, etc.
Worse, if you have some machines (CI, HSM) that are accessed over network traffic and that traffic is secured with an API key, you are back to square one again, where a stolen API key allows for continuous access.
Would this help for auditing? No, not really. You can already have plenty of auditing and alerting to try to prevent exfiltration anywhere an API key is used.
In other words, probably not. It might help a little bit just by making exploitation harder, in optimal circumstances, but it doesn't fundamentally change the problem; if dev workstations or CI are compromised, you're going to have a bad time.
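To make the challenge-response idea from the first paragraph concrete, here's a purely hypothetical sketch; it uses a shared HMAC key for simplicity, whereas real hardware tokens (FIDO2/U2F) use asymmetric signatures, and none of this reflects any actual Git host API:

    # Hypothetical illustration of the challenge-response idea above: the server
    # never sees a reusable bearer secret, only an answer to a fresh nonce.
    # A real hardware token would keep device_key inside its secure element and
    # would sign with an asymmetric key; here it's a plain variable so the
    # sketch is runnable.
    import hmac, hashlib, os

    device_key = os.urandom(32)          # lives inside the token/HSM in reality

    def server_issue_challenge() -> bytes:
        return os.urandom(16)            # fresh nonce per request

    def token_answer(challenge: bytes) -> bytes:
        # Computed on the token; the key never leaves the device.
        return hmac.new(device_key, challenge, hashlib.sha256).digest()

    def server_verify(challenge: bytes, answer: bytes) -> bool:
        expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, answer)

    challenge = server_issue_challenge()
    print(server_verify(challenge, token_answer(challenge)))                   # True
    print(server_verify(server_issue_challenge(), token_answer(challenge)))    # False: replay fails

The point of the sketch is that a captured answer is useless against a fresh challenge, unlike a captured long-lived API key; but as noted above, that doesn't help much if the attacker can keep talking to the token from a compromised workstation or CI machine.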
The most basic way to analyze security is to look at it in terms of cost to penetrate security versus the benefit to the attacker. Things are not a binary "secure" or "not secure", they are secure when the expected costs are greater than the benefits, preferably by a comfortable margin.
The problem is that there are some resources where the benefit to the attacker goes up faster than the company can afford to cover, and the company has to cover all the attack avenues. You can imagine that the value of Slack's source code can be high to certain people, and it can be difficult to completely seal off something like that when source code by its nature pretty much has to be distributed to hundreds or thousands of people. (At least overall; any given source may not be that distributed, but there will be thousands with some sort of access.)
Major corporations basically have to act as if source compromise is inevitable. Best practices obviously include partitioning who can have access to what (perhaps the best practical argument against a one-company mono-repo, if de facto it has to be broken up by access permissions anyhow) and not including anything in the repo that is directly security catastrophic if it gets out (security secrets mostly), but this is still limiting blast radius rather than "solving" the problem.
It's a perfect storm: extremely expensive to cover, in terms of money and internal process friction, with a lot of heterogeneous vectors you need to cover, and, for certain companies, very high value to a number of different kinds of entities.
Being in GitHub specifically would only matter if GitHub was itself compromised. GitHub can not do very much about legit credentials being stolen through completely non-GitHub related means. (Most things, if not all things, that might leap to your mind will also block legitimate usage quite a bit. No matter what crazy thing you can imagine, someone's probably doing it somewhere and it's probably mission critical.)
I expect there have been a great deal more source code compromises than we've heard about.
Does it actually matter? I might be overlooking something but...
Slack is more than its code. The product is really the aggregate of their engineers' knowledge and internal processes. It's not practical to steal the code and build a business or spinoff product around it.
The only legitimate threat seems to be the potential for exploitation. I suppose this might also threaten any backend improvements (economic leverage) they made with proprietary algorithms.
I wish, but Teams is bad not because Microsoft isn't capable of fixing it.
It is bad because, regardless of how infuriating and frustrating the user experience is, people are still gonna be forced to use it under the massive pressure of Microsoft's weight.
I neither expect it nor hope so.
I still remember when they bought Skype. They said they'd keep the Linux support to stem community worries, then dropped it like a brick about a year later.
Angularjs? I'm impressed. I've got to maintain (infrequently) a Django monstrosity with a (small) Angularjs front-end, and I've always had a much harder time fixing Angularjs than e.g. Vue, even though there is relatively little Angular code. They must have designed it properly from the start.
This is exactly why secrets should not be stashed in any Git repository. Granted, I'm not sure they're much safer in other managed services dedicated to protecting our secrets, with the news of several breaches as of late :/
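A minimal sketch of the kind of check that catches this, roughly what tools like gitleaks, trufflehog, or GitHub's secret scanning do with far more rules and entropy heuristics (the patterns here are just illustrative):

    # Minimal sketch of a secret scan: walk the files Git tracks and flag
    # strings that look like credentials. The patterns are illustrative, not
    # exhaustive; real tools ship hundreds of rules plus entropy checks.
    import re
    import subprocess

    PATTERNS = {
        "AWS access key id": re.compile(r"AKIA[0-9A-Z]{16}"),
        "Private key block": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
        "Slack token":       re.compile(r"xox[baprs]-[0-9A-Za-z-]{10,}"),
    }

    tracked = subprocess.run(["git", "ls-files"], capture_output=True, text=True).stdout.splitlines()
    for path in tracked:
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except (IsADirectoryError, FileNotFoundError):
            continue
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                print(f"{path}: possible {label}: {match.group(0)[:12]}...")

Run as a pre-commit hook or in CI, something like this at least stops the accidental cases; it obviously does nothing about secrets that were deliberately committed years ago and now live in the history.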
It would be so nice if GitHub deprecated their one-key-takes-all tokens and let us create per-org personal access tokens. Also, organizations should have power over these access tokens.
They do this now. I'll find a link. Link below. Organizations can choose to allow these, require approval before they are valid, and deny the use of traditional tokens.
Nope, it's a bunch of little communities that are centrally managed and can force their users to a new platform easily. Network effects are minimal; the sunk cost of integrations is a greater barrier to moving off platform, but that is pretty weak as well.
Agree with the first half, but integrations are a moat for Slack. First, a comparable integration needs to exist or be created on the new platform. Second, in larger organizations you end up with many vested interests (individual teams) that have important integrations, and the cost to migrate them takes away from other work they need to do and creates a chorus of voices against a migration. On top of all that, employees generally like Slack.
Everyone in my circle, and I mean literally everyone whose opinion I'm aware of, started off as a huge Slack booster and now can't stand it. Slack has become a total mess. Threads are such an anti-feature. Taking away snippets was an absolute mistake. People are starting to move off of Slack and onto Teams and Google Chat, which everyone agrees are inferior, but Slack has become bad enough that they don't consider it a major loss.
We migrated from Slack to Mattermost some time ago. Best decision ever. Not sure why companies keep using Slack nowadays. It's like those companies many years ago using HipChat instead of Slack. Slack is the HipChat of today.
Could you expand? I think that's incredibly unfair... Slack changed how almost every business communicates. Mattermost is almost a 1:1 open source copy, and is missing most of the ecosystem (which is Slack's real value).
Sure, for some companies, it's really important to own their data completely. I get that (although I personally trust Salesforce more than most one-off companies when it comes to security). But I don't think it's fair to say that Slack is the HipChat of today at all... it's like iOS vs Android, both are just different flavors of a really good product.
I think part of the problem is that there are a lot of integrations from Slack that aren't available on Mattermost. You can certainly make the argument that those integrations won't be developed until more users abandon Slack for Mattermost, but for some it's not worth the loss of functionality.
Self-hosted Mattermost and Gitlab can be done reliably at very low cost, had better uptime than their cloud services for us and would've likely prevented leaks like these. At least you can add some layers of security for self-hosted services so that stolen tokens aren't enough to steal code or messages.
We run it in our VPN with SSO for authentication. You'd need a company device, user credentials, and a whitelisted location to get access.
For smaller companies, something like Cloudflare Teams and/or Tailscale would probably also add a good layer of extra security. I guess it's important to have some device whitelisting in addition to credentials, so that stolen credentials alone are not enough to access code or chat.
I wish GitHub would start allowing all users to use IP whitelisting features. I understand it is a selling point for enterprise accounts, but tokens/passwords are not secure enough in today's environment.
They say no customer data was compromised and are investigating "potential impact" to customers.
The potential impact here is an attacker now has access to some of their code which could let them find and take advantage of vulnerabilities resulting in customer data being accessed.
Technically "potential impact" is correct but I think companies often underplay how severe a source code leak is. It's the exact blueprint of how their app is built.
> OK so does this mean that OSS is vulnerable by default ?
No, but I don't think that's a fair comparison. An open source tool or app is open source by choice so they have an advantage of knowing what they're getting into.
In Slack's case maybe they have a bunch of undocumented APIs which are publicly accessible and now with access to the source code you know what they are, and when you hit them they result in customer data being returned. It runs in their production environment off the live site so it doesn't involve anything crazy like you needing VPN access to their DB to get production data.
That's just 1 basic example of what could happen when a private code base becomes public due to a leak. I'd like to think an open source site wouldn't do that because at a fundamental level the app is built in the open. Also, there's likely many sets of eyes from different folks looking at it from different angles.
Having undocumented publicly accessible API endpoints isn't an option in an open source world but a private code base could maybe get by with security through obscurity on a few things thinking "well, the code is private...".
> In Slack's case maybe they have a bunch of undocumented APIs which are publicly accessible and now with access to the source code you know what they are, and when you hit them they result in customer data being returned. It runs in their production environment off the live site so it doesn't involve anything crazy like you needing VPN access to their DB to get production data.
In a way. The difference is that if you develop something with public source code, you're vulnerable from day zero of development. That means severe vulnerabilities are more likely to surface before the product is mature enough for those vulnerabilities to represent massive risks. OSS applications benefit from survivorship bias: the code that's there probably contains fewer vulnerabilities because any vulnerabilities that were there have likely been exploited already, allowing them to be detected and fixed.
Meanwhile if you allow a closed code base to grow and mature for years and then expose it to the prying eyes of the public, it might contain a few bangers that would have been caught much earlier if the code had been developed in public.
Of course this assumes that vulnerabilities get fixed and dev teams learn to avoid them once they've found them. This somewhat breaks down in typical plugin-heavy open source stacks where any random plugin might be developed by a 13 year old in their bedroom and no longer maintained and your flagship product now depends on it because nobody noticed its glaring security flaws. Though this can be managed and avoided somewhat by being more deliberate when picking out plugins.
EDIT: A really straightforward example is committing unencrypted secrets (API keys, passwords, whatever). In an OSS product this will likely be exploited almost instantly but this creates cultural awareness to not do that (either by learning from mistakes or by being told horror stories about those mistakes). In a closed source product this could easily go on for years with no consequences until someone exposes the code to the public.
Yep, this is very well put and tracks with what I was thinking too.
I've done contract work for a bunch of small businesses (1-50 dev team sizes) and the number of bangers I've seen is quite high. It's a whole different world when the expectation is that your source code is private and the code has been around for 2-10+ years. Committing secrets is pretty common (sometimes accidental, sometimes on purpose), but it goes way beyond that type of thing. There's a whole culture around the app being private.
Hate to nitpick, but is code really the #1 asset of most software companies?
If AWS/Amazon's entire source code were leaked tomorrow, how would competitors use it to their advantage?
Would it help Lyft at all if they gained access to Uber's source code? Probably not.
Is some startup going to be able to take Slack's source code and use it to build a competing service? Probably not, it would be easier to write the code from scratch and reverse engineer features.
Source code is definitely a company asset, but most companies wouldn't be incredibly damaged or threatened in any meaningful way if their source code were exposed... unless said source code exposes security flaws that are used for some attack on the service. But security through obscurity (e.g. keeping insecure code secret) is not a valid way to keep things secure.
A code leak can be very damaging, not just because it might reveal vulnerabilities but also because it reveals IP and allows all sorts of 'IP theft'. And losing the code altogether is the death of the company.
I guess my point is that for 90%+ of software companies, you can very easily reverse engineer the software or just rewrite it from scratch.
If I were given $50 million to build a Slack competitor, even if I had Slack's source code I can't imagine it would be that helpful. More than likely it would be easier to rewrite and re-architect in order to not inherit all of the technical debt of very old software.
Not to mention in most cases, access to the source code doesn't mean you can easily recreate the AWS / Google Cloud / Azure environments necessary to run the source code reliably with scalability.
Edit: There are obvious exceptions to this, but most apps these days don't require a whole lot of proprietary algorithms that can't be easily cloned without source code access.
There are instances of an overseas company stealing a large amount of code and just running it as-is.
As long as you do it outside the jurisdiction of the USA, you'll probably get away with it.
I have seen two instances of this in my professional career (can't talk about either I'm afraid!), but in both cases it had a pretty big impact on the original owner of the code, and in neither case did the company manage to get any compensation.
Off-topic, but I made the mistake of trying to read this on my mobile phone this morning, where I don't use ad blockers per se. It served at least six ads, even ignoring the nonsense at the bottom of the page. So annoying and disruptive that I didn't manage to read the story, and will find the details somewhere else.
We really need a reaction against this nonsense beyond just ad blockers.
Slack is selectively and deliberately limiting public access (discoverability) to the security breach announcements.