Rather than writing up a huge reply with more info, I'll link to another reply with more details on this same thread since someone else already did the writing: https://news.ycombinator.com/item?id=10845985
< naqod> alexf: any legitimacy to this https://news.ycombinator.com/item?id=10845619 ?
<@alexf> naqod: again I'm not in ops so I don't have the deets, but my gut reaction is to say No effing Way
You'd imagine that by now they wouldn't be so quick to deny this stuff.
The only evidence the prosecution had against me was a list of compromised sites and several ColdFusion 0-days I had in my possession.
They could never prove that I generated the list of compromised sites, but the judges felt that the possession of said list was enough evidence to convict me.
We would've appealed but there was no point as the sentence was essentially nothing.
But you do end up with a record, which is not 'essentially nothing'.
Unless, of course, the group involved in the breach are also the ones unleashing the DDoS attacks. Which would also make me think there has been communication between the group and Linode, contrary to what Linode stated.
I know we left them 4 years ago because of how they implement bandwidth caps on private IPs; a few years later another company hit the same problem and nicely wrote it up on their blog: https://docraptor.com/blog/gone-in-60-seconds-how-we-moved-f...
Linode's attempt to keep us (spending $5k a month at the time, but we've grown substantially with AWS now. Linode were at $22m in 2011, so we'd have been 0.3% of their total revenue): "We will certainly be sorry to see you go."
That may just be because they were fed up with us after we opened 27 support tickets about the same networking issue over the course of 11 months, though.
Not nearly as much pressure to respond correctly.
To clarify and reiterate: employees who responded to tickets quickly were praised, even though their responses contained half-truths or outright falsehoods. If someone took 15-60 minutes (or more) to deep-dive into an issue for a real fix for a customer, they were shamed and got a talking-to.
How did you find out about the illicit login?
Hashes can be turned back into plain text; it is just computationally expensive to do so. Hashing only slows down an attack (and/or increases its cost), it doesn't mitigate one. In particular, if the hashes aren't salted then a rainbow table is an extremely effective way of breaking all of the hashes concurrently.
The main method of doing so: generate the hash for every combination of typable characters up to a given length (e.g. MD5() over A-Za-z0-9 & specials, up to 8 characters).
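A minimal sketch of that brute-force search in Python (the target hash and the 4-character cap are my own choices so the demo finishes quickly; the 8-character case above works the same way, just vastly slower):

    import hashlib
    import itertools
    import string

    # Typable characters: A-Za-z, 0-9, and specials, as described above.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def brute_force(target_hex, max_len=4):
        # Hash every candidate of length 1..max_len until one matches.
        for length in range(1, max_len + 1):
            for combo in itertools.product(ALPHABET, repeat=length):
                candidate = "".join(combo)
                if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
                    return candidate
        return None

    target = hashlib.md5(b"ab1!").hexdigest()  # stand-in for a dumped hash
    print(brute_force(target))  # -> 'ab1!'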
This can be mitigated using a more computationally expensive hashing routine (or increasing the work factor on a less computationally expensive one) and salts.
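For instance, a hedged sketch of that mitigation: a per-user random salt plus scrypt (in Python's stdlib since 3.6), with a tunable work factor so each guess costs the attacker real resources. The parameters shown are illustrative, not a recommendation:

    import hashlib
    import hmac
    import os

    def hash_password(password):
        salt = os.urandom(16)  # unique per-user salt defeats rainbow tables
        # n is the work factor; raising it makes each guess costlier.
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify(password, salt, digest):
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)  # constant-time compare

    salt, digest = hash_password("correct horse battery staple")
    print(verify("correct horse battery staple", salt, digest))  # True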
But given enough time OR computing power, all hashes will be broken. AWS makes breaking hashes a lot cheaper as you can bid on spare capacity and perform the operations relatively cheaply.
If you're generating every single possible password up to e.g. 8 characters the password's quality doesn't matter, only the length does.
I wouldn't consider an 8 char password secure, no matter what the entropy is.
But the most effective "rainbow table"-like table is a look-up table with the following (a minimal sketch follows the list):
* leaked passwords in plaintext, associated with an email and any ID (forum username??)
* unsalted hashes of all of those passwords
* hashes (with salt) of known leaked passwords (you try pas$w0rd and find a match for some hash with its salt) - this only works once your attack succeeds.
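Here is a minimal sketch (my own illustration, not from the thread) of the unsalted look-up table described above: hash each leaked plaintext once, and reversing any dumped, unsalted hash becomes a dictionary lookup:

    import hashlib

    # Hypothetical leaked plaintext credentials (email -> password).
    leaked = {"alice@example.com": "pas$w0rd", "bob@example.com": "hunter2"}

    lookup = {}
    for email, plaintext in leaked.items():
        digest = hashlib.md5(plaintext.encode()).hexdigest()  # unsalted MD5
        lookup[digest] = (plaintext, email)

    # Reversing a dumped unsalted hash is now a single lookup:
    dumped = hashlib.md5(b"hunter2").hexdigest()
    print(lookup.get(dumped))  # ('hunter2', 'bob@example.com')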
If you do a quick count, you won't be surprised that most passwords are fairly short and simple. If two complex passwords appear very similar, you can assume with good probability that they are used by the same person. You can learn some private data just from looking at a password (e.g. birthday, pet's name, door number, company they worked for, sports team they root for), many of which turn out to be the crucial hint for, or the actual answer to, security questions.
I have never opened or downloaded any leaked data and don't know if it's legal to use at all, but the black market probably has over a petabyte of such data available.
It would be very interesting to see the whole world attack a couple of hashes per day. Imagine you go to a website, it gives you some plaintext, you run a couple of quick scrypt hashes with random salts, and return the response. With a billion online users running this once a day, you may end up finding a successful match of "this password == this hash with this salt" once in a while. But hey, that's what a botnet can do... and then Bitcoin!
What you describe is not rare; it can be done with cross-site scripting. How it happens depends on the injection method (perhaps SQL injection).
In fact, depending on how the sessions are managed the attacker might just need read access to log in without a password.
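As a hypothetical sketch of that point (host, cookie name, and token are all made up for illustration): if session tokens are stored server-side, read access to the sessions table is enough to replay one, no password required.

    import requests  # third-party: pip install requests

    # Token read straight out of a dumped `sessions` table.
    stolen_token = "9f8e7d6c5b4a..."

    # Replaying it as a cookie logs the attacker in as that user.
    resp = requests.get("https://victim.example.com/dashboard",
                        cookies={"session_id": stolen_token})
    print(resp.status_code)  # 200 if the token is still valid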
Weak password? Weak hashing algorithm?
1. The Bitcoin hacks, March 2012
2. HTP hack, April 15, 2013 (CF exploit)
3. Second HTP hack, April 16, 2013 (another CF exploit)
4. MySQL server that allowed anonymous logins (?!?!), January 19, 2014
I'm not counting their domain name and various other parts of their infrastructure provided by third parties being compromised, but if I were, the list would be significantly longer.
The first 2013 attack was apparently exacerbated by cleartext password storage for Lish (their management shell) and API tokens: https://marco.org/2013/04/16/linode-hacked
The 2012 Bitcoin attack involved a breach of Linode's customer service portal. Today's attack is some kind of unspecified or unknown breach involving the Linode Manager.
I guess the obvious commonality here is that all the attacks target the "soft" Linode layers AROUND managing deploys of Linux and Xen/KVM/UML rather than the "hard" targets of those widely used systems. This also happens to be the layer where Linode should be adding value (as opposed to the cheaper VPS providers out there) and I think it's increasingly troublesome that they continue to have such severe security issues.
Is this company (CEO - Christopher Aker) not investing in security staff, security training, best practices etc, or are they investing tons and just getting breached because they host so many sites? Unclear. But it's easy to imagine it's the former, from the outside, given all these incidents.
But yeah, people get hacked through their hosts all the time. Best approach is colo with minimum access for the dc staff.
Then there are other personalities that went dark, who are presumably also not arrested.
Unless Thomas Asaro can name names, that was a bluff.
I've been consistently saying this for years. Linode is a joke and you would be crazy to use them for anything other than toy/non-critical use cases.
[Linode developer here]
Been working on (re)writing things in Python since I started. It takes a while, but I think everyone recognizes that the CF codebase is difficult to maintain. The good news is that significant progress is being made and we're still doing routine audits of the existing stuff.
This doesn't work: auditing ColdFusion code is impossible without auditing the entire platform. The whole platform is so full of bugs and strange behaviour that it's actually impossible to produce secure ColdFusion code.
MySQL server that allowed anonymous logins
They try to downplay it pretty hard.
database accessed using old forum credentials
"Effective immediately, Linode Manager passwords have been expired. You will be prompted to set a new password on your next login. We regret this inconvenience, however this is a necessary precaution.
"A security investigation into the unauthorized login of three accounts has led us to the discovery of two Linode.com user credentials on an external machine. This implies user credentials could have been read from our database, either offline or on, at some point. The user table contains usernames, email addresses, securely hashed passwords and encrypted two-factor seeds. The resetting of your password will invalidate the old credentials.
"This may have contributed to the unauthorized access of the three Linode customer accounts mentioned above, which were logged into via manager.linode.com. The affected customers were notified immediately. We have found no other evidence of access to Linode infrastructure, including host machines and virtual machine data.
"The entire Linode team has been working around the clock to address both this issue and the ongoing DDoS attacks. We’ve retained a well-known third-party security firm to aid in our investigation. Multiple Federal law enforcement authorities are also investigating and have cases open for both issues. When the thorough investigation is complete, we will share an update on the findings.
"You may be wondering if the same person or group is behind these malicious acts. We are wondering the same thing. At this point we have no information about who is behind either issue. We have not been contacted by anyone taking accountability or making demands. The acts may be related and they may not be.
"The security of your data, the functionality of your servers, and your confidence in Linode are extremely important to all of us. While we feel victimized ourselves, we understand it is our responsibility, and our privilege as your host, to provide the best possible security and service. You can help further enhance the security of your account by always using strong passwords, enabling two-factor authentication, and never using the same password at multiple services.
"We sincerely apologize for the recent disruptions in your Linode service. Thank you for your patience, understanding and ongoing trust in Linode."
Sad. We spent >$10k/month with them for a while, before their shit started falling apart. They didn't appear to care at all when we left, so I suppose they have an awful lot of large customers who just don't mind their stack disintegrating beneath them every sixty days at most.
At what point should Linode start thinking damage control? DDoS, breaches, lack of transparency...
Once we'd done the initial up-sticks-and-move to AWS, our first priority was to use their redundancy and failover to the fullest (six months of sleepless nights due to Linode made this rather front of mind) - so nothing that's happened at AWS has ever been more than an inconvenience - we've managed five nines since the move; before, we managed one.
They did obligingly increase these caps when we begged them to do so, but at that point the writing was on the wall, and we kept on bumping into other weird and wonderful limitations and issues, such as the fact that someone running an intense job on the same host could bring our VPS's to an absolute crawl.
It really is a shame, as we desperately wanted to make it work - we liked Chris's hands-on approach (very much like ours) - but ultimately our confidence in them was so eroded by the time things started going genuinely wrong on their end that we had no choice but to leave.
As I said, we kept random small single-server stuff (Wordpress sites mostly) there, as if you're not dealing with their networking, performance is generally OK - but the network limitations were the absolute clincher for us, and at one point it was literally every day that we'd find that one of their switches had broken, or we couldn't ARP IPs for no apparent reason, etc. etc.
Unless they've moved?
Likely the server hosting the blog is being DDoSed too. I feel bad for them, but they should know a bit better already...
Attacking a blog talking about the hack? It sure seems that someone has a grudge against Linode. :-/
Linode, Github, Stackoverflow, Imgur - they've all been targeted. But what do they have in common? In a word: popularity. The core reason these sites are targeted is that taking them down is impressive to others.
The source of this is typically twofold:
- For the lolz. Someone with a botnet just wants to show off, taking down something known gives them more notoriety.
- For a sales pitch. Someone has botnet capacity that they want to sell, and "I took down Linode" is a great way of demoing that to potential buyers.
The first one is more common when someone finds a new method of traffic amplification and just wants to show it off (i.e. they trick a third party into DoS/DDoSing a target). The second is genuine criminal enterprise.
April, 2013: https://blog.linode.com/2013/04/16/security-incident-update/
January 2014: https://blog.linode.com/2014/01/19/an-old-system-and-a-swat-... (they mention the breach almost in passing...the swatting is unrelated)
I really want Linode to do well, but this may be it for me.
I love HN.
Using the access gained within the Linode Manager, the attacker reset the root password on a few systems, and used Lish to gain root access. We were alerted to this activity and fully revoked the attacker's access within 60 minutes of the first node being compromised. Working with Linode support, we discovered which user account was being used and completely deactivated the user. We also isolated the VMs, and performed forensics on read-only copies of their disk images.
In our situation the attacker knew one of our user's passwords and MFA secret. This allowed them to provide valid authentication credentials for an account in the Linode Manager. It's worth noting that all of our active user accounts had two-factor authentication enabled. An interesting data point was that the user who had their account compromised was no longer in possession of the MFA secret themselves. Their cell phone had been reset (thus deleting all data) 8 months prior. The user could not log in to the Linode Manager if they wanted, so it was our determination that the key could not have been obtained from the user and was more likely on Linode's side.
We also have evidence from access logs provided by Linode that the attackers tried to authenticate as an ex-employee, whose username ONLY existed in the Linode database. It was absolutely unique and was not used elsewhere by the employee, making the username an accidental honeypot. This was another piece of data supporting that Linode was the source of our compromise.
We immediately reached out to them not only to inform them of their compromise, but to assist them in investigating it. We were confident that the Linode database had been breached, and that the secret key used to encrypt information in the database had been compromised as well.
In addition to reaching out to Linode, we also worked with a third-party security firm to audit our work done during the incident. Likewise, around the same time we reached out to law enforcement to assist in investigating the attack. I believe our public disclosure includes this information. This was in the middle of July 2015.
We did not get confirmation in July that there was a breach of the Linode Manager or any associated credentials.
In the end, we migrated away from Linode because of this breach (even before it was publicly disclosed) in Aug 2015. We also never were able to confidently disclose that Linode was the vector due to lack of confirmation from their end. While all of us who responded to the incident were confident they were the source, we now thankfully have the data to confirm it.
I worked at Linode for just under three years, and worked on quite a few different things there. I started on support and moved on to a development role (including writing ColdFusion). I left Linode on good terms. California is much more enticing than NJ, so I wanted to relocate. Plus I was interested in doing more of an Ops role, instead of working on customer-facing web applications. I'm still enjoying it. :)
I think there are lessons that can be learned whenever a company has some sort of security incident. This is especially true if they are willing to publicly disclose details of the incident. We've wanted to provide what limited information we had, but wanted to wait until we had confirmation that Linode was the vector.
While there is some relief in finally determining what we believe to be the vector of our attack, it's very unfortunate that Linode engineers are dealing with the fallout right now.
I honestly don't understand why anyone would be stupid enough to use Linode.
They continue to (a) have incidents and (b) fail to disclose them in a timely and transparent manner.
Why were people still using Linode after their poor handling in the 2013 hack?
Keeping logins of ex-employees on 3rd-party systems is a no-no, though I admit full removal might pose some hurdles.
The honeypot user would not have been able to access the account had the credentials been valid, but based on the information given by Linode we did see someone attempt to log in as that user only once around the time of the compromise.
1) As someone else pointed out, you're an ex-employee of Linode. You went out of your way to hide this fact. I'll refrain from listing all of the very obvious reasons why your word on Linode should be taken with a grain of salt at the very least.
2) Being able to blame Linode for your own data breach is a fantastically easy (although lazy) way to pacify customers about the fact that their personal data was just pilfered by someone on your watch.
All that being said, what have you presented that can be proven? All that can be proven is that you're an ex-Linode employee. Everything else is hot air that we're all meant to take your word for. Tons of appeal to authority in your explanations. You keep invoking some mysterious third-party "expert" security group who conveniently agrees with everything your own company "discovered." If you were actually confident in your own abilities and those of your team members, there wouldn't be an immediate appeal following every attempted assertion.
Plus, even if you really did hire someone, what company isn't going to just say "yes" and agree to whatever PR campaign their customer is running while dumping wheelbarrows of cash into their pockets? Frankly I don't believe you, and I find your consistent drumming against Linode to be highly suspicious in the wake of these attacks. You're not involved, are you?
The Linode post you're referring to is just saying that they expired everyone's passwords. That's not admitting anything, especially not anything about a separate incident from a year ago. What lawyer would ever take this and say "okay, you can legally publicly blame Linode now?" No lawyer worth his salt. In other words, you're full of shit. Your story is full of holes and tells, and I think you should stop posting so much garbage before you're on the receiving end of a lawsuit or are considered a suspect.
TechnikEmpire, January 6th, 2016 at 10:28 pm: "It's hilarious watching all of these armchair experts criticize Linode for the actions of another."
PagerDuty and WP Engine were both compromised 'inexplicably' during the same timeframe at the same hosting provider. Seems pretty self explanatory. Linode didn't disclose their "security firm" so why should PagerDuty? Linode couldn't explain how accounts were accessed and it isn't the first time! Linode is hacked once a year; it's a feature. They need to get their shit together and stop pretending security is a game.
> If you lose your token and get locked out of the Linode Manager, email firstname.lastname@example.org to regain access to your account.
> Should you need us to disable your Two-Factor Authentication, the following information is required:
> An image of the front and back of the payment card on file, which clearly shows both the last 6 digits and owner of the card.
> An image of the front and back of the matching government-issued photo ID.
There doesn't seem to be a way to say "I have my big boy pants on, don't let anyone in under any circumstances". This is the first 2FA setup I've seen that still allows bypass by contacting support.
EDIT: I also find it odd that you have to manually generate a scratch code, and they don't automatically generate it for you. Again, all of the other 2FA setups I've gone through have done this.
Sadly, this is quite common, especially with organizations that provide phone support.
This is not the kind of thing I want to learn from HN.
Concerning, since I had to reset my password and regenerate my 2FA using only my old password...
Do you guys know any good providers? Not AWS.
That being said, they only have one data center in NA at the moment, and they recently suffered a rather nasty network outage in that data center. Would definitely look into them if your target market is primarily EU, but if you're mostly NA-based, YMMV.
Their uptime and infrastructure are great, but you're totally on your own (e.g. if you're getting DDoSed). Also, their traffic cost is not as easy to predict as DO/Linode/Vultr's.
This sort of thing almost became background noise when I worked there. I can say that things improved somewhat after the HTP compromise, at least for a while.
It's part of why they hired someone in July to rebuild it in Python.
If they can't be bothered to invest in their tools and processes then this is the sort of thing that happens.
Vultr, Ramnode, Wable, iWStack
Or, depending on the number of VPS's you have, you might like:
- Aliyun, the cloud service from Alibaba
- A dedicated server from OVH, or their mid-tier brand, SoYouStart. This is my personal favorite. They have real DDoS protection, data centers in North America and Europe, reasonable web interfaces, lots of available IPv4 space, and DIRT CHEAP prices. Run Proxmox as the distro, and you get a decent interface to create and manage VPS instances.
The most important piece would be to try and split instances across at least two of these providers so that you have some fast recourse if something goes wrong. For the things I'm running, doing a nightly rsync of the data from one provider to another suffices as reasonable insurance.
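For the curious, a minimal sketch of such a nightly sync (hosts and paths are hypothetical; run it from cron, e.g. once a night):

    import subprocess
    import sys

    SRC = "/var/data/"  # live data on provider A
    DEST = "backup@provider-b.example.com:/var/data/"  # mirror on provider B

    # Mirror the tree over SSH; --delete keeps the mirror exact.
    result = subprocess.run(
        ["rsync", "-az", "--delete", "-e", "ssh", SRC, DEST],
        capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit("rsync failed: " + result.stderr)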
However, I've yet to see a public-key authentication scheme that users would find acceptable for web applications. Do you really expect all users to set up x.509 auth in the browser?
What is your public-key solution for authenticating the web applications that customers demand?
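For reference, the server side of browser x.509 auth is not the hard part - here's a minimal sketch using only Python's stdlib (the certificate file names are placeholders you'd generate yourself, e.g. with openssl). The hard part is getting every user to install a client certificate:

    import http.server
    import ssl

    server = http.server.HTTPServer(("localhost", 8443),
                                    http.server.SimpleHTTPRequestHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
    ctx.load_verify_locations(cafile="client-ca.pem")  # CA for client certs
    ctx.verify_mode = ssl.CERT_REQUIRED  # refuse browsers without a valid cert
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()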
There is some minor evidence remaining in the .bash_history that is curious.
4 ls -al
5 chown syslog auth.log
6 ls -al
7 chown syslog kern.log
8 ls -al
9 chown syslog syslog
10 ls -al
11 echo -n '' > /media/xvda/root/.bash_history
12 echo -n '' > /root/.bash_history
13 echo -n '' > /root/.viminfo
14 L=$(find /var/log -type f); for F in $L; do echo -n '' > $F; done
15 rm -rf /etc/ssh/*_key* #remove host keys
16 rm -rf /var/lib/dhcp/* # dhcp leases
17 echo "echo 'options rotate' >> /etc/resolv.conf" > /etc/dhcp/dhclient-exit-hooks.d/rotate
19 ls -al /var/log
21 ls -al /var/log
24 ls -al /var/log
27 adduser in
28 su - in
29 vi /etc/sudoers
30 vi /etc/gro
31 vi /etc/group
32 groupadd --help
35 groupadd --help
36 vi /etc/group
37 su - in
This has been a hellaciously thorough attack.
Which is kind of hilarious.
We detached this subthread from https://news.ycombinator.com/item?id=10847715 and marked it off-topic.
If it wasn't, I wouldn't have referenced it.
From what I gather, he thinks law enforcement is hilarious.
you could have made the comment more substantive in a way that was clearly not a personal attack, and that would have solved the problem.
So if the rumor is true, he wasn't technically lying.
Whether any of it is true or not remains to be seen. He's not ever given me any reason to believe he's lying, however.