Also this:

> Security researcher Vinoth Kumar told Reuters that, last year, he alerted the company that anyone could access SolarWinds' update server by using the password "solarwinds123".
https://www.reuters.com/article/global-cyber-solarwinds/hack...
Wait, FTP as in actual goddamn FTP, or actually SFTP but we call it FTP?
I could make fun, except in 2018, I moved a company off of FTP and onto S3. To be fair to the company, no developer had worked there since 2016, so they were just running on autopilot. Still, anyone even vaguely concerned with security should have stopped using FTP sometime in the early 00s.
Most of the time both the control and data channels are encrypted once TLS is configured, but it's not as easy as with IMAP or SMTP, where you basically disallow all commands except STARTTLS until the connection is upgraded.
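For the record, explicit FTPS with Python's stdlib looks something like this (host, user, and password are placeholders; this is a sketch, not a hardened client). The `prot_p()` call is the step that's easy to forget, and skipping it leaves the data channel in the clear:

```python
import ftplib
import ssl

def open_ftps(host, user, password):
    """Open an explicit-TLS FTP session (AUTH TLS) with both channels encrypted."""
    ctx = ssl.create_default_context()            # validates the server certificate
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse old TLS versions
    ftps = ftplib.FTP_TLS(context=ctx)
    ftps.connect(host, 21)
    ftps.login(user, password)  # login() upgrades to TLS before sending credentials
    ftps.prot_p()               # encrypt the *data* channel too, not just the control channel
    return ftps
```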
> Still, anyone even vaguely concerned with security should have stopped using FTP since sometime in the early 00s.
Had the same discussion in 2018.
But if the weak link in the chain is "our (paying) customer is so tech illiterate that they need a printed-out tutorial to use an FTP client, and switching to SFTP would anger all clients", what are you going to do?
In my case, the marketing department was using the FTP server, but their client also supported S3, so I told them to use the S3 credentials instead.
In another case around 2015, a coworker started and was learning the system. He found that a web store front was being powered by an FTP box that had a list of items in it, and the FTP box was periodically being updated by someone somewhere, but he couldn't figure out who or where. As far as I know, he never figured it out, and the mystery uploads continue to this day…
That's great. I've built similar systems and if I disappeared off the face of the earth the same situation would likely arise... SFTP has its issues but it works.
Sometimes you have to just say no, it's not worth risking the whole business. It's better to just write a client app for the customer that's easier for them to use.
In retrospect, I'm sure that's obvious to SolarWinds as a whole company. But at the time, these decisions get made by particular individuals. They're weighing an eventual low-probability risk to the whole company against direct, short-term impact on something their managers are breathing down their necks about.
I'm sure there were hundreds of decisions by dozens of SolarWinds managers like this that worked out fine. They made some personally and immediately beneficial choice and got praised for getting things done on time and on budget. The latent problem, be it a security risk, a hidden pile of technical debt, or a cut corner, stayed latent.
Unless you can rejig the system to change the odds for those individual decision-makers, things like this will keep happening.
I would set up all new customers over something secure.
I'd have some flag so that new customers can't use something insecure.
If my service was used for private data, I'd force old customers to migrate. If not, I would advise them via a service bulletin, but let them carry on with their insecure setup.
I understand that in the world of business, security risks are like any other risk, and a dollar value can be put on every security improvement, and sometimes that dollar value isn't good value for money.
The safe, secure and completely free way to transfer large files around the Web.
What is ZendTo?
It is a classic problem: you need to send files to someone, or they need to send them to you, and there's no way except email. But they are too large or your administrator won't let you transfer the files by email at all. ZendTo to the rescue!
Why ZendTo?
ZendTo is a completely free web-based system, which you can run on your own server with complete safety and security. It runs from any Linux / Unix server or virtualisation system, there is no size limit and it will send files 50% faster than by email.
It's not a "service".
It's a web app you can run on your own hardware/VM.
All your data stays in-house.
That's exactly why corporates love it.
You keep total control.
And it's completely open-source (GPL) so you can tweak it if you want to.
FTP is fine if you have some sort of signing for verifying your code. Your initial install probably shouldn't be over FTP, so that you know what you're verifying, but it's fine for updates.
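A minimal sketch of that principle, with a hash check standing in for real public-key signature verification (proper code signing verifies a signature against a pinned public key instead): the transport is untrusted, and the check happens locally against something published out of band.

```python
import hashlib

def verify_update(path, expected_sha256):
    """Check a downloaded update against a digest published over a trusted channel.

    Simplified stand-in for signature verification: however the bytes arrived
    (FTP or otherwise), the integrity check is done locally by the client.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream, don't load whole file
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```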
The issue here was that their update pipeline was compromised and they were distributing compromised updates.
FTPS or FTPoTLS is admittedly a somewhat awkward protocol but it’s no worse than other older protocols like IMAPS. The difference between SFTP and FTPS is just a debate over whether to use SSH as transport or TLS. Basically every SFTP server out there is also an SSH server which comes with its own issues while FTPS servers are more designed to operate on entirely virtual users coming from a DB instead of system accounts.
The reason for moving away from FTP is that it's unergonomic and the software support isn't there anymore, not just that it's old. And if you use IIS on Windows, you're pretty much stuck with FTPS.
SFTP is a completely different protocol. Notably, SFTP is not even a "file transfer" protocol (i.e. GET/PUT) but a full-blown remote filesystem, which even has two distinct rename operations, one with Unix and one with Windows semantics.
Look, yes, SFTP is an extension of SSH and a completely independent wire protocol with some different features and semantics. But from the user's perspective it looks just like a different flavor of FTP, and all the security comes from the underlying transport. It's remarkably easy to accidentally give your users more access than they should have with SFTP, because SSH is a complex protocol. A good example is that "SFTP only" users can still port forward by default unless you remember to turn it off. And SSH doesn't have public CAs, so in practice there's a lot of blindly accepting keys. With FTPS, you have to remember to turn off anonymous access, and the dynamic port range makes firewalls annoying. There are trade-offs, and no tool is going to be better in all situations.
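For the SSH side, the lockdown described above looks something like this in OpenSSH's sshd_config (the group name and chroot path are assumptions); note that forwarding is on by default and has to be disabled explicitly:

```
# /etc/ssh/sshd_config -- confine members of a hypothetical "sftponly" group
Match Group sftponly
    ChrootDirectory /srv/sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no   # on by default, so "SFTP only" users can port forward unless disabled
    X11Forwarding no
    PermitTunnel no
```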
The idea that FTPS is bad and you should feel bad is more like the JSON vs XML debate than it is legitimate security gripes.
Here is one interesting project that lets you watch, in almost real time, secrets being leaked (or suspected secrets; there may be false positives) across GitHub, Gists, GitLab, and Bitbucket: https://www.shhgit.com/
It's not just the initial compromise that's getting "sophisticated hack" headlines in this case -- it's the extraordinary care that the attackers took to evade detection afterwards. Among other things: delaying communication to home base for a couple of weeks after the backdoor was initially installed; use of steganography to make that look like ordinary network traffic afterwards; use of entirely new exploits, not in any common database, to avoid detection triggered by a match.
While the specific implementations in this case were unique and clever, these tactics aren't particularly uncommon. If you have direct access to SolarWinds' signing credentials and download server, the rest is just a matter of how much time and effort the attacker is willing to put into it.
Any "sophisticated hack" is likely to have some mundane details. This might've given an avenue to start exploring; it doesn't seem likely it's what let them sign binaries by itself, and it doesn't change the cleverness of the steganographic techniques described in the FireEye analysis.
This isn't really a mundane detail, though. The fact that they had non-secure FTP access (not even SFTP or FTP over SSL) open to their main downloads page, with a ridiculously insecure password, is pretty startling. I can't even understand why they had FTP running like that.
> Earlier this year we received a report from Selamet Hariyanto who identified a low impact issue in our Content Delivery Network (CDN), a global network of servers that deliver content to people accessing our platform around the world, where a subset of our CDN URLs could have been accessible after they were set to expire. After fixing this bug, our internal researchers found a rare scenario where a very sophisticated attacker could have escalated to remote code execution. As always, we rewarded the researcher based on the maximum possible impact of their report, rather than on the lower-severity issue initially reported to us. It is now our highest bounty – $80,000.
I have found a LARGE social media site with a similar kind of vulnerability. I downloaded some data as a proof of concept. There's no way to reach their CEO and no security contact.
Check HackerOne to see if the company is listed. It's a site where white-hat hackers can report vulnerabilities and in return get rewarded with bounties.
If you have information on a vulnerability for a large social media, you might be looking at a nice reward. https://www.hackerone.com/product/bounty
If you read the reports, rather than second or third hand coverage, note how the attacker concealed their activities after they got in. That’s what sophisticated refers to - breaches are common but avoiding detection for so long on so many networks is not.
That last point is actually rather hard to determine. Detected attacks tend to divide into two groups: the scripted continuous probing that's just background noise these days (if you have access to the logs on any internet-connected device in front of a firewall, you will typically see it being probed at least every 10 minutes), and targeted attacks, where the first rule is to be very careful, lay low, and perform careful data exfiltration. Reading the reports on the kinds of the latter that were discovered, and in particular how they were discovered, suggests there is probably much more of that going on than is generally realised.
The news and press defined the "attack sophistication". An attack of this scale definitely needs some amount of sophistication, but on a difficulty scale of 1-10 for infiltrating their network, I would rate it a humble 0.5/10.
> We have been advised that this incident was likely the result of a highly sophisticated, targeted, and manual supply chain attack by an outside nation state...
This is egregious, for sure, but it doesn't explain how a DLL signed with their certificate ended up in the wild.
Have SolarWinds' handling practices for their code signing certificate come to light? It's sounding more and more like we're going to find out it was a "PFX file w/ the password 'password' saved on a network share" kind of situation.
This is wild speculation but just to highlight how FTP could be leveraged to get a signed DLL. Imagine SolarWinds has a "sign everything in this directory" script. Attackers gain access to that trusted directory either directly via FTP or by pivoting off FTP access. They just plop their DLL in there and let SolarWinds auto-sign it. No cert required.
I implemented an HSM-based signing service for a firmware attestation system a few years ago. Authorization to sign and an audit trail of signature requests were a big deal. Something like the "plop a file in a directory" would make me weep.
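A toy sketch of the difference (HMAC with an in-memory key stands in for the HSM's asymmetric signature, and all the names are made up): the gate is who may request a signature, and every request, allowed or denied, lands in an audit trail — as opposed to "anything in this directory gets signed".

```python
import hashlib
import hmac
import time

SIGNING_KEY = b"stub-key"        # hypothetical: a real service keeps this inside an HSM
AUTHORIZED = {"release-bot"}     # principals allowed to request signatures
AUDIT_LOG = []                   # every request is recorded, signed or not

def sign_artifact(requester, artifact):
    """Sign an artifact only for authorized requesters, recording every attempt."""
    entry = {"who": requester,
             "sha256": hashlib.sha256(artifact).hexdigest(),
             "at": time.time()}
    if requester not in AUTHORIZED:
        entry["result"] = "denied"
        AUDIT_LOG.append(entry)
        raise PermissionError(f"{requester} may not request signatures")
    entry["result"] = "signed"
    AUDIT_LOG.append(entry)
    # HMAC here is a placeholder for the HSM producing a real code signature.
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).hexdigest()
```

With the "plop a file in a directory" scheme, there is no requester identity at all, so the authorization check and the audit entry above have nothing to key on — which is exactly the weeping part.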
> This raises many questions. Were the certificates used to sign the binaries obtained from that public GitHub repository or, from any other information leaked publicly. Exposed certificate could have allowed hackers to sign their malicious SolarWinds Orion binaries and pass them off as legitimate software developed by SolarWinds, subsequently uploading them to the Downloads server with the previously found leaked FTP credentials.
They might ask if password complexity/rotation is enforced. They probably don't ask if any accounts (service accounts, etc.) are excluded from rotation. People are also taught not to volunteer anything to an auditor... and this doesn't even get into the subject of SSH keys and SSH CAs, which most auditors don't ask about at all.
I personally put less value in audits and more value in Red Teams being given full immunity to penetrate every nook and cranny. I would like to see more companies incentivize and reward in-depth penetration testing in all environments, including production. For the corporate leaders reading this, there is risk, but the reward is uncovering many future landmines your operations, code deployment teams and internet user base would have stepped on.
This is true. At my $PREVIOUS work place, we answered all questions from auditors and RFPs using the most favorable interpretation towards us of the question.
For example, one of our audits for our product asked if we implemented an admin idle session timeout of less than 15 minutes. There were three admin panels: one with a 10-minute idle session timeout, one with a 1-hour idle session timeout, and one with no idle session timeout. After I explained the three admin panels to him, the manager said something like, "since one does have an idle session timeout of less than 15 minutes, we answer yes to the audit question".
> we answered all questions from auditors and RFPs using the most favorable interpretation towards us of the question.
I am currently living this life. The problem is that the system is set up as a race to the bottom with opposed incentives.
If I answer with the strictest interpretation with my paranoid blue-team attitude, we appear worse than our competitors and are immediately in a worse business position - regardless of our relative or absolute security posture. This is why the Department of Defense is moving away from self-attestation in 800-171 to outside assessment in CMMC.
Why not standardize on ISO and SOC2? I don't know very much about it, but I suspect those big-boy standards aren't suitable for small-business America/sub-subcontractors.
> Why not standardize on ISO and SOC2? I don't know very much about it, but I suspect those big-boy standards aren't suitable for small-business America/sub-subcontractors.
Also, how is that any different? I've been through SOC2 and it was just the same "here's a bunch of questions that the auditors want us to answer and provide evidence for". Maybe a little better, but still something that you could bias by providing answers that were ... technically true...
To add to this, SOC1/SOC2 is only as useful as the security policies and standards behind it, and that assumes they were written around everything the company actually does. Auditors only validate, to some degree, that you are doing what you say you are doing, but what you say you are doing may not be all-inclusive of what you actually do.
Audits don't involve a fine-tooth-comb review of everything - PwC does not go through your git commits line by line. They tend to verify specific critical things and verify process for everything else.
To the extent your processes don't match what is reported, your audit is a work of fiction. And of course things will always fall through any available cracks, people skip process for various reasons, etc.
In the supply chain, not necessarily. Any software that goes onto a classified information system is supposed to go through a security review and audit, though. Based on my first-hand experience with various DOD agencies, I don't think many of the people working the information security side of the house are as up-to-date on technology as they should be.
Even casting it as something like, "security audits do not ensure security sufficiently to be worth the cost and effort" is incorrect.
You can certainly get an auditor who is green, incompetent or has their own agenda. Welcome to humanity. But on average, they do more or less what they're supposed to do.
Keep in mind the binary files that contained the backdoor were digitally signed by SolarWinds after being tampered with. So this FTP credential leak might be part of the supply chain compromise, but is not the whole enchilada.
This was posted yesterday and flagged into oblivion. Unless new information has come out, there wasn't and still isn't any reason to believe this leak had anything to do with 2020's megabreach.
Edit: To clarify, this version follows up the flagged post with a bit more information and a lot more speculation. It does a lot of work to "not" make claims while setting them up and basically making them.
Yeah, this is a less-but-still-misleading follow-up. The password might turn out to be related or might not, but I don't think the added details point either way.
This security company's marketing blog is mixing up the facts with speculation.
Let's especially be careful not to use it in the "nation state vs. kid in a basement" debate.
The researcher added that there may be certificates exposed in that repo that could have been used to sign the binaries. It's still a relevant update.
Especially the information that the repo was archived by the Web Archive back in 2018. It's not easy to know the "who", but the "how" can be speculated on and investigated.
I think I snuck an edit in while you were writing. Sorry!
I see where the researcher says it's not impossible he missed a certificate, but not a reason to believe he actually did. The article is unfortunately full of leading questions and reaching speculation.
That post editorialized the title to imply that the leaked credentials were part of the breach. The article itself, and this post's title, just note the fact it was out there.
Considering how often people commit their credentials, it seems Git or GitHub needs an --i-need-a-nanny option that would check whether you're about to publish things you would rather not publish. It would have to be an installation-time option, because the people who goof up like this wouldn't know or remember to add a CLI parameter. Still, I'd bet a lot of people would turn on a nanny option that warns when you're about to commit lines containing the word "password" or "secret" or "key" (obviously it would have to be more clever than a simple text compare, otherwise it would warn on lines like "function checkPassword(String userSuppliedPassword) {").
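A toy version of that nanny check, run over the staged diff. The patterns are illustrative only (real scanners like the ones mentioned elsewhere in the thread ship hundreds), but note it already dodges the checkPassword-style false positive by requiring an assignment:

```python
import re

# Hypothetical pattern set; real tools use far more rules plus entropy checks.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"""(?i)(password|secret|api[_-]?key)\s*[:=]\s*['"][^'"]{4,}['"]"""),
]

def find_secrets(diff_text):
    """Return (line_number, line) pairs for added diff lines that look like secrets."""
    hits = []
    for n, line in enumerate(diff_text.splitlines(), 1):
        if not line.startswith("+"):
            continue  # only lines being added can leak something new
        for pat in SECRET_PATTERNS:
            if pat.search(line):
                hits.append((n, line))
                break
    return hits
```

Wired into a pre-commit hook (or a CI job), a non-empty result would block the commit and print the offending lines.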
Someone in my company committed an AWS credential to a public GitHub repo, and AWS actually emailed within a few hours saying that the access key had been deactivated, or something like that. AWS seems to be scanning public repos pretty actively for such instances.
I've been stupid enough to commit AWS creds to GitHub twice. (At least twice?) The first time, AWS emailed me within an hour. The second time, about 20,000 French-language spam emails were sent out, and I only realized the issue when AWS suspended the credentials because of the bounce rate on the emails. Thankfully it only ended up costing me a couple of bucks, but it scared the hell out of me.
AWS now sends emails if you commit access keys; even better, they're catching loads of other sensitive info via privileged access to the GitHub API. Cool and creepy at the same time, right?
There are lots of tools that do that. Obviously it’s always going to have false positives and false negatives. A dev environment that makes it easy to deal with secrets the right way is probably better
There are also ways of using Git without a repository in the cloud. As a security oriented company, it seems a good policy to avoid using cloud services as much as possible.
You could add those checks to pre-commit hooks. However, the problem with those hooks is that they need to be added locally by each user. There are already modules/libraries with sets of regexes that can perform the filtering you're suggesting.
Another option is to use the CI pipeline to perform those checks. Sure, by the time the pipeline runs, the secrets are already in the repository, but at least you caught them early. In that case, though, you should definitely rotate the secrets.
Almost certainly. But people who are scraping GitHub for leaked credentials are almost certainly doing so from IPs that would be hard to link back to them. Through VPNs, tor, compromised devices, etc.
Not to mention that you'd be tracking down people who may have not (yet) done anything illegal.
This type of thing can happen easily with Git. We just commit a directory, and every damn file in that directory gets committed.
What I do, is have a file that aggregates my sensitive stuff (like server secrets and whatnot), and call that file something like "DoNotCheckThisIntoSourceControl.swift". I then add a git ignore line, on that name.
I'll also sometimes store it outside of the repo root (I use Xcode, so I can drag files in from anywhere).
What is interesting here, though, is that this, like the Covid-19 leak in Brazil, was on an employee's GitHub account, not a company account. Sure, the employee could have prevented it, but from a company perspective, they have no authority to enforce coding practices on a personal GitHub account.
The only thing I can see preventing this at an organisational level is a DLP solution scanning the repositories (GitGuardian does this for example)
You can put detection in the CI/CD pipeline to prevent secrets from getting into the repository. And in any case, knowing the horses have run away as soon as possible is pretty essential to damage prevention.
What is interesting for me here is that this leak, like the Brazilian covid leak, happened because of an employee's GitHub repository, which companies have no authority over.
GitGuardian at least scans the GitHub accounts of employees though.
True, but I have always taken the position that security starts before we write our first line of code, and should be something that we keep front and center, every line we write.
When I write a line of code, I have an automatic "red team" approach. I think "I'm a blackhat hacker. How do I leverage this line?"
Not a 100% guarantee, but it sure does help the result to be more secure.
In my experience, 99% of security is good old-fashioned horse sense.