Remember that depending on where you're based and your agreement with your bank, you might lose any standard "protection" for fraudulent transactions that occur after you learn your card's been compromised.
I would assume a large majority of their clients don't read HN and would have no line of sight to do the same.
It's unfortunate that this happened, but I'll be looking for another host in the near future.
Edit: Fixed spelling mistake.
I mean, you have this Linux company that's pretty much an open source champion to us Linux guys and then... Adobe ColdFusion. Really?
I know nothing about ColdFusion, don't get me wrong, I just find it an incredibly odd choice for a Linux company.
That having been said, I think Linode just learnt a really unpleasant lesson and will become an even more secure provider for it. I believe that moving my sites away from them at this moment would be like selling stocks when they're about to go up.
It's a bit surprising that they haven't moved away from CF, but this is a good example of how hard it can be to transition away from legacy technologies. It's also not clear that security would have been improved by adopting, say, Rails, given that there has been no shortage of security issues there either.
My guess is that this was either licensed or not considered a core feature and handed off to someone who was already invested in Cold Fusion.
1. e.g. nested loops were broken so for i: for j would repeat j i times rather than correctly iterating the inner loop. The app server tended to leak memory and crash, particularly on non-Windows platforms, and the automatic restart mechanism was completely broken.
In the meantime, Perl works for us. It's not cool or sexy but it gets the job done. Probably a similar situation at Linode.
What is not cool about Perl? You don't need to transition to a new framework outside of Perl. You can upgrade your backend by investing your time in rewriting the Perl code to newer standards.
> What is not cool about Perl?
Second, in general, good developers are hard to find by design (e.g., to avoid recruiters, but also because they are most of the time already doing interesting work).
Lastly, yes, it's hard to find good Perl programmers to do boring work, under boring conditions, at a boring wage. But then that's true of all good developers. Try to find a Rails programmer to remediate crappy code (I've tried).
We've had far more success at hiring good PHP developers and teaching them Perl than we have had at finding Perl developers who can write web technologies.
But this might just be a facet of our location (a suburb more than an hour away from any city): developers of "niche" languages (as Perl has sadly become) tend to get drawn to the city and higher wages, whereas those who are still good developers but less money-driven tend to pick the more popular languages, because that lets them pick work in the area where they want to live. (That's probably an unfair generalization, but it's true for the local area where I live.)
What's your product or service? Now Perl programmers are seeking -you- out :)
And I'm sure you already know about the YAPC job fair: http://www.yapcna.org/yn2013/job-fair.html
We do online games (amongst other things). Though I don't want to go into too many details as I try to keep a degree of anonymity online.
Perl is my first interpreted language. It has a very special place in my heart, even if it was awkward at first to get used to it coming from a C background. Perl will always be a cool kid.
As for 'cool' I'm not so sure. Perl will be a niche sandbox for quite a while, but most of my run-ins with Perl (as of late) have involved trying to remove it for something else. I think choosing a more widely accepted language has legs for a variety of reasons, including code portability and long-term maintainability, both stemming from the fact that if there's only one guy maintaining all of the Perl code then that's a very-bad-thing.
But, that's just me. To wax nostalgic on Perl and use that as a basis for use-case seems, at least somewhat, reckless.
Rewrites where you're already intimate with the code base don't take that long for non-core stuff. I assume it's the web interface that's written in CF (which you can see from viewing the source in their signup and login pages). I guess you could argue it's core to their conversion process, but I mean their core skills are servers and their maintenance rather than web development so they probably actually haven't spent that many man hours on it.
youtube was always done in python as far as I understand. http://www.youtube.com/watch?v=G-lGCC4KKok
But yeah, back in 2002-3 coldfusion was an option.
And trust me, there is no way Google ever would have said "rewrite it in Python." They would have said "rewrite it in Java," like every other frontend they run (Gmail, Calendar, Docs, etc).
When something makes PHP look like a bastion of sanity and competence, you really should run a million miles in the other direction.
1) It was most common then to go the cgi-bin route for server side, which wasn't pleasant. CF offered an alternative to that very early on and made webdev more "accessible".
2) ColdFusion was baked into a lot of GUI based web design tools (I can only think of the name of one, NetObjects Fusion, because I rarely used those sorts of tools). Even though they probably weren't using a GUI based tool, being baked into several meant that people's exposure to CF was pretty common. I did absolutely nothing with ColdFusion, but routinely ran into it with freelance work I did.
3) In large "enterprise" shops, I remember losing a bid because a manager Google(tm)'ed PHP, saw that it originally stood for "personal home page", and told me they needed to hire someone more professional (thank God I dodged that bullet). The open source web programming languages of the time, in general, made managers uncomfortable for various FUD reasons. Case in point, I remember on the Windows side CF jobs getting eclipsed, nearly overnight, by RFPs for ASP in VBScript. ASP, after all, was backed by Microsoft, commercial, and effectively free (included with their Windows NT/2000 Servers).
For example, Struts was released around 2000 (now we have many more choices of Java web frameworks). Since then they could have migrated to Struts 2, and later, for example, to Spring MVC, or simply stuck with Struts 2.
"With the release of ColdFusion MX 6.0, the engine had been re-written in Java and supported its own runtime environment, which was easily replaced through its configuration options with the runtime environment from Sun. Version 6.1 included the ability to code and debug Shockwave Flash." 
What concerns me is that they would have been perfectly happy to sweep most of this under the rug if Ryan hadn't forced their hand.
The security lesson is one thing (hacks happen). I'm far more concerned with the lesson in transparency, as that speaks to the character of the company.
That's a rather cynical thing to say. There's no proof that without Ryan's chat log, Linode wouldn't have told us anything.
The 'proof' is that the announcement came in very short order after Ryan's comments, and in the past, Linode has _never_ made such announcements (despite serious security breaches)
The ISP 'graveyard' is full of companies that tried to roll their own billing system in PHP.
You're right about it being an odd choice for a Linux company, though.
This latest update dances around the issue and doesn't go far enough to explain what happened, whether or not your data is at risk, and what is being done to rectify it. If you are a Linode customer reading this, there is no indication as to whether or not you should cancel your credit card number. You, the customer, are left to decide on your own without all the information.
Does Linode have an obligation to advise their customers to request new card numbers? It doesn't look as if their card information was stored in compliance with PCI-DSS, and their systems were breached and private information was compromised. Is "the private key was also compromised but we don't think they can guess it, so let's just move on" good enough?
Customers shouldn't have to read between the lines on security updates like this. I expect the companies I deal with to be up front and transparent, even if it makes them look bad. The reality here is that they messed up, not because they didn't patch a week-old exploit soon enough, but because of the poor security practices they had in place internally that allowed the hacker to get the information they did, and by not being up front with their customers to begin with.
Any company can tell you that they take security seriously and that their customers best interests are their highest priority, but what ultimately matters are your actions. What are your security policies? What is your company culture toward security? How quickly do you announce problems and how transparent are you in those announcements?
Why do you say that?
The requirements are basically to use strong encryption, only store those encrypted numbers on a system not accessible from the public internet, have auditable access controls and logs, and have a procedure for keeping systems up to date. The last 4 digits of the card are not subject to the same requirements as the full account number. None of that prevents the scenario that played out here -- someone gets into an internet-connected server, then connects from there to the database server with the encrypted credit card numbers.
07:52 < HTP> the CCrypter class of the linode application
context was accessable from outside the wwwroot using
undocumented ColdFusion methods. i was fully able to
decrypt the ccs using the in-memory privkey that they
supplied the password for.
Linode has two servers. One is a web server that runs their website and contains its source code. One is a database server containing customer information and encrypted credit card numbers. The web server is connected to the internet. The database server is not. They communicate over the private network.
The web server is compromised. The attacker has the source code, which includes the IP/host and login credentials for the database server, since the website code talks to the database to function. The attacker can now access the encrypted credit card numbers in the database by connecting from the web server.
At no point are the encrypted numbers stored on an internet-facing machine, but the attacker now has access to them. There is no evidence that says otherwise. There's no way to build this that doesn't have a path from the internet to the card numbers either, since eventually they need to get onto an internet-connected machine to talk to the payment gateway and actually charge those cards.
I have to disagree.
Web server has write-only access to a remote database table on the accounts/billing server, writes account change requests; accounts/billing server polls table, matches account numbers to billing information, submits charge requests to payment gateway via a firewall that allows access to only a single IP or set of IPs; accounts/billing server writes charge result back to another table; web server has read-only access to the results table and subsequently changes the user's account type or sends out an email or performs other administrative duties.
That's how I'd build it, I've done similar stuff before. (I have a paranoid business client.)
The nice thing about this is that you can capture credit card information on the web server, encrypt it with the public key, and then decrypt it using the private key which is only on the accounts/billing server.
Getting unauthorized remote access to the accounts/billing server would be a challenge. If they needed authorized remote access to it for customer service reasons on ports other than ssh, I'd use authpf on the pf firewall sitting between the accounts/billing server and the rest of the network, along with a unique password-protected ssh key for each employee. But, really, 1 or 2 sysadmins are the only people who should have non-console access to it.
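A single-process sketch of that split, with the two roles written as functions against one shared table pair. Table, column, and function names here are invented for illustration; a real deployment puts the roles on separate machines, with the write-only/read-only grants enforced by the database:

```python
import sqlite3

# Stand-in for the remote database the web server is granted access to.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE charge_requests (
        id INTEGER PRIMARY KEY, account TEXT,
        amount_cents INTEGER, processed INTEGER DEFAULT 0);
    CREATE TABLE charge_results (request_id INTEGER, status TEXT);
""")

def web_submit_charge(account, amount_cents):
    """Web server role: INSERT-only access to charge_requests."""
    cur = db.execute(
        "INSERT INTO charge_requests (account, amount_cents) VALUES (?, ?)",
        (account, amount_cents))
    return cur.lastrowid

def billing_poll_once(gateway_charge):
    """Billing server role: poll for unprocessed requests, match the account
    to stored card data (not shown), charge via the firewalled gateway,
    and write the result back."""
    rows = db.execute("SELECT id, account, amount_cents FROM charge_requests "
                      "WHERE processed = 0").fetchall()
    for req_id, account, amount in rows:
        status = gateway_charge(account, amount)
        db.execute("INSERT INTO charge_results VALUES (?, ?)", (req_id, status))
        db.execute("UPDATE charge_requests SET processed = 1 WHERE id = ?",
                   (req_id,))

def web_read_result(request_id):
    """Web server role: SELECT-only access to charge_results."""
    row = db.execute("SELECT status FROM charge_results WHERE request_id = ?",
                     (request_id,)).fetchone()
    return row[0] if row else None

req = web_submit_charge("acct-42", 2000)
billing_poll_once(lambda account, amount: "approved")  # fake gateway
print(web_read_result(req))  # approved
```

The key property: even with full control of the web server role, an attacker can only enqueue charge requests and read statuses; card data never crosses that boundary.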
> The web server is compromised. The attacker has the
> source code, which includes the IP/host and login
> credentials for the database server, since the website
> code talks to the database to function.
There is no reason for a properly-designed system to be storing passwords or credit card numbers in the same database as the rest of the user's data. Such security-critical data should always be placed on a hardened machine, with the web server communicating via a small and easily-auditable interface.
... isn't that called 'a network'? This seems to contradict your earlier statement about 'machine B should be considered internet-facing.'
A typical example of such an interface would expose about four operations:
// Return true if the given user/password pair is valid.
bool PasswordValid(string user, string password)
// Return true if the password was changed successfully.
bool ChangePassword(string user, string old_password, string new_password)
// Returns a reset token.
string RequestPasswordReset(string user)
// Returns true if the password was reset successfully.
bool CompletePasswordReset(string user, string token, string new_password)
Credit card data would have a similar interface -- add card, remove card, list user's cards without full cc#, charge card.
The advantages of this over the sort of "hashed passwords in MySQL" design are obvious. Not only is sensitive data protected against "SELECT *", but it's now possible to apply security policies to password management. For example, the password service might enforce rate limiting on how often a particular user's password can be checked -- that way, even if the web server is compromised, the attacker will be unable to compromise passwords any faster than they could via the standard login screen.
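A toy version of that rate-limiting idea (the user store, window, limit, and names are all invented for illustration; a real service would compare against stored hashes, not plaintext):

```python
import time

_USERS = {"alice": "hunter2"}  # stand-in user store
_WINDOW_SECONDS = 60.0
_MAX_TRIES = 5
_attempts = {}  # user -> list of attempt timestamps

def password_valid(user, password, now=None):
    """Check a password, allowing at most _MAX_TRIES checks per user
    per _WINDOW_SECONDS, regardless of who is asking."""
    now = time.monotonic() if now is None else now
    recent = [t for t in _attempts.get(user, []) if now - t < _WINDOW_SECONDS]
    if len(recent) >= _MAX_TRIES:
        raise RuntimeError("rate limit exceeded for %s" % user)
    recent.append(now)
    _attempts[user] = recent
    return _USERS.get(user) == password

print(password_valid("alice", "hunter2", now=0.0))  # True
```

Because the limit is enforced inside the hardened service, an attacker driving this interface from a compromised web server can't test guesses any faster than the limit allows.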
The password machine would typically be configured to have only the password service and SSH running, with SSH access limited to a key that's stored in a safe somewhere and used only for emergencies.
If a developer doesn't feel comfortable building such a system themselves (reasonable), then there are many commercial products available. They're expensive for a home user or ramen-budget startup, but a company like Linode would be able to afford one easily.
Additionally, I object to your implication that any network service can be trivially compromised with "one more round of cracking". This is a view of software security that is propagated by science-fiction movies, and is not grounded in real life. Vulnerabilities that permit remote code execution are relatively easy to prevent by taking appropriate precautions when designing and implementing a network service, and almost every such vulnerability has its root cause in doing something obviously insecure (e.g. writing in C/C++ or a language with eval(), passing user input into an OS procedure).
You have to take that a step further... "we don't _think_ they can guess it, and we don't _think_ they were able to recover it from RAM"
All this update does is restore my faith in their ability to store my information correctly. It does nothing to reassure me that they won't try to cover anything up again.
You could see this as a cover up or you could see this as a disclosure from the crackers to Linode. I see it as a cover up, since Linode should tell us anyway that servers were broken into, but I could see how it could have been seen as reasonable disclosure.
In any case, if what ryan* is saying is factual, then Linode reneged on the deal, a shady one in the first place, HTP has credit card hashes and evidence that Linode stored Lish passwords in plain text (a log maybe?), and the only thing separating them from credit card numbers is the passphrase on the private key, which is hopefully a strong one.
This is why you build security in layers. If one layer gets broken, there are other layers to protect you.
I'm not underestimating the severity of this incident, but I think it's good to see this layered approach being used by Linode. If the passphrase was strong enough, and not stored anywhere, then I doubt HTP would be able to gain access to the private key, even if they got the actual private key file.
> @Eivind – our private key is stored only in encrypted format. The passphrase is not guessable, sufficiently long and complex, not based on dictionary words, and not stored anywhere but in our heads.
So it's short enough to remember and likely has some sort of pattern. There's a limit to what a person can remember, lower if there are several people that have to remember it.
To memorize, just copy it into your favorite text editor, then type it 25 times in a row and delete. If you are paranoid, make sure you use a text editor that does not store temp files. Do not save this password anywhere. Set a timer and do it again an hour later, then again the next day. 10 minutes of your time and you have a password in your head. I can keep 10-20 of these at a time, maybe more since I seem to be able to type older ones from years ago.
I don't consider myself to have a great memory. I can barely remember lyrics to songs I've listened to dozens of times and it takes me hours and hours to memorize lines for plays. But I started doing this for passwords ten years ago and it is very reliable.
But for x many customer credit card details you're really looking at a much longer password than that. I'm talking 64 characters or more of pure random data.
You shouldn't be compromising for the convenience of being able to remember a password when it secures such critical data in my opinion.
Edit: I do agree though that your method is a very good way of remembering passwords.
If I was responsible for this key I might increase from my normal 15 to 20 characters, giving me more than 120 bits of entropy, and I would expect to be safe from offline brute force for decades, and I could remember it.
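The arithmetic behind that claim, assuming each character is drawn uniformly at random from the 95 printable ASCII characters:

```python
import math

# Entropy of a password of `length` characters drawn uniformly from an
# alphabet of `alphabet` symbols: length * log2(alphabet) bits.
def entropy_bits(length, alphabet=95):
    return length * math.log2(alphabet)

print(round(entropy_bits(15)))  # 99
print(round(entropy_bits(20)))  # 131
```

So going from 15 to 20 random printable characters adds roughly 33 bits, i.e. multiplies the brute-force search space by about 10 billion.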
Being realistic, to expect someone to type in such a long password regardless of if they can remember it or not is clearly unreasonable.
That and the fact that an offline attack can be run on this key is not promising.
Edit: These "making an assumption" arguments are silly. It is good practice to assume the worst case; assuming the best in this situation is bad.
How is your faith in that regard restored, when they just said they stored passwords in plain text? They didn't store passwords in plain text by accident, they chose to. They knew the risks and didn't care.
Sounds like it's not stored in plaintext.
"There were occurrences of Lish passwords in clear text in our database. We have corrected this issue and have invalidated all affected Lish passwords effective immediately."
Presumably, they know how many customers are affected, because they invalidated their passwords. They didn't tell us.
Why do you think companies can only do one thing at a time? And where have you worked where software engineers are drafting press releases or conducting security audits? And why shouldn't keeping customers (who have their own apps) informed be their top priority?
> Why do you think companies can only do one thing at a time? And where have you worked where software engineers are drafting press releases or conducting security audits?
All it takes is one debug log statement to leak through from development and you can be logging plain-text passwords on every login.
Also the social engineering implications could be huge. Many people use the same password across various sites so in theory there could be a lot of compromised email accounts. Which could then mean compromised internet banking sites, trading sites etc.
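One cheap guard against that stray-debug-statement failure mode is a logging filter that scrubs password-like fields before any record reaches a handler. The `password=` field pattern here is just an invented example:

```python
import logging
import re

class RedactPasswords(logging.Filter):
    """Mask anything matching a password=... field in log messages."""
    _pat = re.compile(r"(password=)\S+", re.IGNORECASE)

    def filter(self, record):
        record.msg = self._pat.sub(r"\1[REDACTED]", str(record.msg))
        return True  # never drop the record, just scrub it

logger = logging.getLogger("auth")
handler = logging.StreamHandler()
handler.addFilter(RedactPasswords())
logger.addHandler(handler)
logger.warning("login attempt user=bob password=hunter2")  # hunter2 is masked
```

It's a belt-and-braces measure, not a substitute for keeping debug logging out of production, but it means one forgotten statement doesn't silently accumulate plaintext credentials.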
Nevertheless, this isn't the first time Linode has needed to spend their benefit-of-the-doubt points.
Also, access to LISH even without a root password provides access to some scrollback output, which could expose sensitive information.
Also, LISH allows multiple connections, all of which see the same console, so the attacker can just connect to it and wait for a root prompt to appear when root logs in next. (Does changing the LISH password prevent this attack if they're already connected? I doubt it.)
Also, most distribution boot processes can be messed with at boot from the console. For example, you can ctrl-c to stop important daemons from loading. In some cases you may be able to get a shell without the root password.
Also, Magic SysRq can be accessed over a serial console by sending a BRK. You do not need to be logged in to do this, and it could be used to kill processes, reboot, etc. I don't know if LISH allows sending BRK.
They've now reset all the passwords and fixed the bug that meant some of them were being stored without encryption. So what we're saying is there was a window of a week or so where only LISH users affected by the clear text bug may have been open to an attack, if they happened to use LISH during this time frame and the attacker was targeting them. Not great, I agree, but it could have been worse.
Hopefully, they'll change their setup going forward so that each LISH connection to the same VM gets its own console.
The connection is to the same TTY when I log in via the web terminal and SSH at the same time, so if you are logged in via LISH anyone with the LISH password has access to the logged in user's console + scrollback.
I wasn't able to send BRK from Putty though.
It doesn't appear that you can reboot or enter single user mode from the LISH console login prompt.
I think that this is what's most criminal about the whole event. It's probably safe to assume that "there were occurrences of Lish passwords in clear text..." is a euphemism for "we stored all Lish passwords in cleartext."
Simply invalidating these is probably not enough. It's very likely that people reused their root login passwords as their Lish passwords, meaning that Linodes running without firewalled sshd are potentially vulnerable.
I hope Linode communicated who had "the occurrences" of compromised cleartext Lish passwords, so that those users could take appropriate action.
This borders on libelous, in my opinion. If all Lish passwords had been stored in the clear, I think they would have said that. They've been pretty specific in the rest of the update.
They say they have "invalidated all affected Lish passwords effective immediately". I just logged in to Lish via SSH using the new password I had set on Friday, so I guess mine at least wasn't one of the "occurrences"....
Lish passwords in the clear might have been in support tickets stored in the same database, or chat logs.
Also: "It's very likely that people reused their root login passwords as their Lish passwords..."
Really? I guess it's never a good idea to doubt people's capacity for stupidity, but this seems very obviously a bad idea. Hopefully this isn't so common as to be "very likely".
> Credit card numbers in our database are stored in encrypted format, using public and private key encryption. The private key is itself encrypted with passphrase encryption and the complex passphrase is not stored electronically.
By not clarifying in that update whether the private key was stored outside the compromised host, they leave you to read between the lines that they are trying hard not to address that particular issue.
I really don't want to assume that the attacker can now brute force the private key's passphrase, and it might not take long if that "complex" passphrase was a word out of /usr/share/dict/words.
Or worse, that the billing process was monitored via the compromised network, RAM, or a key logger, to get the unencrypted private key or its passphrase.
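To see why a dictionary-word passphrase is so dangerous offline, here's a toy crack loop. The wordlist, salt, and KDF parameters are stand-ins and say nothing about Linode's actual setup; a real attacker would run millions of candidates at hardware speed:

```python
import hashlib

wordlist = ["correct", "horse", "battery", "staple", "xenodochial"]
salt = b"example-salt"

def derive_key(passphrase):
    # Same derivation the defender would use; the attacker just replays it.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

target = derive_key("battery")  # pretend this key protects the private key

# Offline, nothing rate-limits this loop.
cracked = next(w for w in wordlist if derive_key(w) == target)
print(cracked)  # battery
```

Against a dictionary-sized search space, even an expensive KDF only slows the attacker down by a constant factor; only a genuinely high-entropy passphrase makes the loop infeasible.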
How much of your time spent chasing down and changing a CC number is worth a maximal risk of $500 with a very low probability of occurring?
EDIT: Thanks to nenolod for the correction!
I'm pretty sure you're off by a factor of ten. If someone makes fraudulent charges with your credit card, your maximum liability is $50, and you'll only have to pay that if the charges occur after you've reported the card stolen. (There are different rules for debit cards, that doesn't apply here.)
Everyone should go reset their virtual machine's root password if they reused them.
This comment is the last dead comment: (https://news.ycombinator.com/item?id=5399305)
- Any mention of the claim that they had made a deal with the hackers. No attempt to deny the claim.
- Any mention of when the systems were compromised or when they discovered that they were compromised. The post starts with "Yesterday," mentions working "around the clock" and mentions Friday, giving the impression that this is all very recent, when according to the hacker Linode has known for two weeks.
I am particularly interested to find out whether linode did enter into a deal with the hackers.
Many companies should learn the lesson from Microsoft's huge shift years ago when they implemented the Trustworthy Computing Initiative. They have much larger targets (an operating system, a hypervisor, a database server, a full office suite, etc.) compared to a browser plugin or a document reader, and in my experience they're doing a far better job at keeping security at an acceptable level.
Comparing IIS+ASP.NET security versus some frameworks such as Rails (another huge source of vulnerable web applications) is also enlightening.
Because PDF was never meant to be a static format, and Adobe Reader was never meant to be a static document reader.
However, since most PDFs are static anyway, I think it would be a very good idea for PDF reader software to treat every PDF as a static document by default, and only allow scripting after the user clicks Yes on a scary-looking warning.
I'm not a proponent of this approach, but it is what it is and it is the reason that Reader is such a bloated security-ridden nightmare.
"This activity appears to have been a coordinated attempt to access the account of one of our customers. This customer is aware of this activity and we have determined its extent and impact. We have found no evidence that any Linode data of any other customer was accessed. In addition, we have found no evidence that payment information of any customer was accessed. ... Out of an abundance of caution, however, we have decided to implement a Linode Manager password reset,"
and their disclosure today. I find that I'm uncertain whether they would've released today's disclosure if it weren't for the IRC chat log getting passed around online.
I don't get why they thought it would be prudent to reset everyone's manager passwords "out of an abundance of caution", but didn't bother to mention the possible leak of payment information. We don't have any information on the strength of the passphrase used (nor can they provide any, of course); we have no idea how feasible it might be to brute-force it.
Even if the full credit card numbers are never broken, the last four digits are still available in the clear. That has been enough to give unauthorized persons access to accounts on other services in the past (see the Matt Honan thing from last year: http://www.wired.com/gadgetlab/2012/08/apple-amazon-mat-hona...). Depending on what other information was stored in the database, customers may be at risk of having accounts compromised elsewhere using the billing information from Linode. Linode really should have included that information in Friday's announcement.
Also, ColdFusion has had 103 different CVEs going back to 1999 (http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=coldfusion). Adobe doesn't really have a sterling record when it comes to this stuff. A nice example is the recent CVE-2013-0632 (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-0632), a remote code execution exploit that was discovered already in use back on January 4th but didn't have a fix available until January 15th. Linode -- and their customers -- are a really big, juicy target for future CF vulnerabilities.
"our private key is stored only in encrypted format. The passphrase is not guessable, sufficiently long and complex, not based on dictionary words, and not stored anywhere but in our heads." 
Also, the post says salted sha256 passwords are useless. Evidently, they do not have a clue how to safely store passwords, because that is not useless at all.
Finally, it's an outright lie that systems were not accessible, because we already know that they hacked seclists.org/nmap. Even their previous post acknowledged this.
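For what it's worth, here's roughly why salting isn't useless: identical passwords hash differently per user, so precomputed tables don't apply and each account must be attacked separately. (PBKDF2 is my choice for this sketch; a deliberately slow KDF like bcrypt or scrypt is a better default than bare salted SHA-256 against brute force.)

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest). A fresh 16-byte salt per user means identical
    passwords produce different digests."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    # Constant-time comparison to avoid leaking matches via timing.
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

s1, d1 = hash_password("hunter2")
s2, d2 = hash_password("hunter2")
print(d1 != d2)                            # True: same password, different salts
print(verify_password("hunter2", s1, d1))  # True
```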
Although I still wonder about how they run recurring charges.
I'm really trying to avoid speculation at this point -- we simply don't have enough information to make smart guesses -- but I think we can reasonably conclude that nobody at Linode was manually keying in a password every time there was a charge to a customer card, and therefore it's not insanely paranoid to cancel any cards associated with Linode.
Really the most annoying thing here is Linode's obtuseness about the attack and their processes and what's at risk, and if I end up moving to another service (as at the moment I'm inclined to), that will be the reason -- not because they got compromised, but because they valued their own butts over their customers'.
They're a VPS service for chrissake. They could explain in very technical terms exactly how manager passwords were stored and how the credit card information was stored & accessed and what other billing information may have been compromised -- a full disclosure -- and most of their customers would not only understand most of it, but they would appreciate it.
Sorry, I kinda went off on a tangent there, but the more I think about how this has been handled, the less I feel like I can trust them.
But jokes aside, there is a huge gap between the strongest and weakest algorithms used to passphrase protect a private key. I wish they gave us more information about the certificates.
Including: is it ONE private certificate for everyone? Or one per client/customer? How was the key pair generated? When was the key pair generated? Which algorithms does it utilise and at what key length?
Where do you get that from? They encrypt all of the CC info using the public key. You need the private key AND the private key passphrase -- and one of them isn't stored on a computer.
The article and the IRC log.
The IRC dude said flat out that the "bad guys" have the private key and the encrypted data. They also have made no claims that turned out to be untrue yet.
The blog post implies that the bad guys have the private key by talking about passphrases. Why are passphrases on the private key even a discussion topic unless the private key has fallen into "enemy" hands?
Therefore the ONLY thing between them and unencrypted data is the passphrase protecting the private key.
If we assume that the key is used on that machine, and is password protected, and is used automatically, then the unencrypted key, or its password, must exist in memory, if not on disk. Perhaps they log in when the machine boots and manually enter the password once, so it is cached and can be used for repeat decryption processes.
If the key is in memory, then a blackhat who knows what he's doing can get access to it.
Linode layered pretty well: Hashed and salted passwords, isolated systems, encrypted CC database and passphrase protected decryption key with no written record of what that passphrase is. They didn't do a perfect job but then who does (or even can)?
Hopefully, they used 4096 bits and a random >20 char string as the passphrase. That combined with a standard cipher algorithm (DSA, RSA) should be enough to keep us safe.
I will concede that this is highly likely, but they did not confirm that the private key was taken.
The IRC log said that they had taken it and since Linode is talking about the passphrase in their blog post, the implication there is that it was taken too.
If the bad guys didn't have the private key, then the passphrase would be literally irrelevant to the discussion, since they would have to break the primary cryptographic key rather than just the passphrase.
A passphrase protects a private key file NOT the data. So why would Linode even bring it up?
I would bring it up when discussing how I cryptographically secured sensitive data. Especially when people might be confused and try to argue that they should have used pbkdf2 and not encrypted the information (because they were thinking about password hashes).
Bluff: saying one thing, doing the opposite ("Our passwords are really long", when actually they are short).
Double bluff: saying one thing and actually doing it, since everyone will assume we are bluffing ("Our passwords are really long, honestly!", and they actually are, but the bad guys will assume we are bluffing and using a short password).
I might be biased though as my default approach to most things is to do (and have control over) as much as possible yourself unless there are extremely good, unequivocal reasons not to.
Sorry to be morbid, but... there's also rubber hose cryptanalysis. Whether the payoff or actors involved here make that a legit concern I don't know.
I'm glad they posted a more detailed update regarding this issue publicly.
I really do wonder at what point people are going to realise that they aren't dealing with a company who cares about their interests.
The passphrase may not be stored in the database but presumably the decryption key is kept in memory by the app server while running. Otherwise it wouldn't be able to charge anything. And if the hackers could modify the ColdFusion source code, they can get to anything the app server can, including the decryption key. ColdFusion uses an interpreted language so injecting code is possible with just filesystem access.
Linode says there's "no evidence" of decrypted card numbers, but I think it's more a matter of no evidence... yet. They also call salted hashes "useless" to hackers, which betrays a rather unenlightened perspective on infosec.
It's all a very confusing world where nobody has a definite answer. Just 2 cents, don't act like I'm sharing a fact or anything
The hacker won't be able to take your CC number for themselves in this case, but they'll sure be able to put spurious/malicious charges on your account.
I suppose this wild speculation isn't helpful or productive, and waiting on more information might be a better idea. I think the reputation damage has already been done, though, judging by the comments on the previous post.
Another possibility is that the passphrase is entered when the machine is booted, and then the private key (not the passphrase) is loaded into memory. This is how e.g. ssh-agent works. When this mechanism is used, you don't get access to the private key itself without direct memory access (which is possible, but not that easy to carry out, and would typically require root access and debug tools etc). However, the private key can be then used to carry out crypto operations.
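For anyone unfamiliar with the ssh-agent pattern mentioned above, it looks roughly like this (a sketch; the key path is illustrative):

```shell
# Start an agent and load a passphrase-protected key into it once;
# afterwards the agent holds the decrypted key in memory and performs
# crypto operations on request, never re-exposing the passphrase.
eval "$(ssh-agent -s)"      # starts the agent, exports SSH_AUTH_SOCK
ssh-add ~/.ssh/id_rsa       # prompts for the passphrase one time
ssh-add -l                  # lists the identities the agent now holds
```

From that point on, anything that can talk to the agent's socket can use the key, but the key file on disk stays encrypted.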
From the IRC chat logs, it looks like ryan got the private key file, but he didn't get the private key itself. At least he didn't seem to provide any solid proof to suggest he did.
> However, the private key can be then used to carry out crypto operations.
Surely this is of concern? According to the logs they'd known about the breach for a week - that seems like enough time to decrypt each card one by one and exfiltrate them.
Thinking about it though, that's basically the role of the key passphrase.
That's why this line, that most SSL providers give in their instructions, is so dangerous:
openssl rsa -in server.key.org -out server.key
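To spell out why that command is dangerous (the passphrases below are illustrative, and the key is generated here just so the example is self-contained):

```shell
# Create a passphrase-protected RSA key to stand in for server.key.org
# (passphrase is an assumption for illustration):
openssl genrsa -aes256 -passout pass:'old-passphrase' -out server.key.org 2048

# The quoted command, run on that key, writes it back out with the
# encryption stripped -- no output cipher means no passphrase:
openssl rsa -in server.key.org -passin pass:'old-passphrase' -out server.key

# server.key now protects nothing; anyone who can read the file no
# longer needs the passphrase. To rotate the passphrase while keeping
# the key encrypted, supply an output cipher instead:
openssl rsa -in server.key.org -passin pass:'old-passphrase' \
  -aes256 -passout pass:'new-passphrase' -out server.key.enc
```

SSL providers suggest the stripped form so the web server can start unattended, which is exactly the trade-off being discussed here.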
> The private key is itself encrypted with passphrase encryption and the passphrase is not stored electronically
So even if the hacker bragged about obtaining both the public and private keys, the private key was protected and therefore unusable without the passphrase.
At this point, we can all hope linode used a very complex passphrase... My gut feeling is that they did.
Once the attacker has root, they can dump the memory and access this information. The attacker claims to have gigabytes of data, which could well include a memory image (or swap partition image, if the program that stored this in memory didn't lock the memory to prevent it being swapped out).
> 07:52 < HTP> the CCrypter class of the linode application context was accessable from outside the wwwroot using undocumented ColdFusion methods. i was fully able to decrypt the ccs using the in-memory privkey that they supplied the password for.
> 07:53 < HTP> coldfusion runs as a single process, and its memory can be accessed using the ColdFusion wrapper
The CC numbers were encrypted using a public key. The corresponding private key was itself encrypted using "[a] complex passphrase [that] is not stored electronically". They could be lying, but if they are not then this is perfectly valid security.
Using this scheme, you can store new CC numbers any time you want using the public key. You'd have a regular billing cycle (say a month) with billings happening in a short time window (all my Linode accounts get billed within minutes of each other, so I think this is the case for Linode). When you want to bill your customers, you'd input the passphrase, which wouldn't be stored anywhere, would be used to temporarily decrypt the CC numbers, and bill everyone. Unless you are attacked in that window (or an attacker gains access before that window and can remain unnoticed until it happens, and even then they'd need root access to read the memory and not just access to the filesystem), the CC numbers are pretty safe against any attack that leaks files/databases/etc...
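A rough sketch of that scheme with stock OpenSSL (key size, filenames, and the passphrase here are made up for illustration, not Linode's actual setup):

```shell
# One-time setup: generate an RSA private key that is itself encrypted
# under a passphrase which is never stored electronically.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
  -aes-256-cbc -pass pass:'long-random-passphrase' -out private.pem
openssl pkey -in private.pem -passin pass:'long-random-passphrase' \
  -pubout -out public.pem

# Any time a customer signs up, the front end can store a new card
# number using only the public key -- no secrets needed:
printf '4111111111111111' > card.txt
openssl pkeyutl -encrypt -pubin -inkey public.pem -in card.txt -out card.enc

# Only during the billing window is the passphrase supplied, decrypting
# each stored number just long enough to run the charge:
openssl pkeyutl -decrypt -inkey private.pem \
  -passin pass:'long-random-passphrase' -in card.enc
```

Stealing `private.pem` and `card.enc` alone gets an attacker nothing; they also need the passphrase (or a memory dump taken during the billing window).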
I've had account changes billed to my card within minutes, even (and especially) in the middle of the night (when I do most of that stuff). I'm having a little bit of trouble imagining that someone's sitting in front of a terminal and typing a passphrase into a prompt every three to five minutes, all day long.
An encrypted filesystem doesn't mean the sysadmin types in the passphrase every time something hits an "open()" call; it's (usually) just once at boot time. Or sudo can be configured so you get to authenticate every hour, not every time you run a command.
Also, to be honest, I've never had a problem with Linode, and I've heard they've got pretty good support, so if there is a dude on duty at 3am he probably has nothing to do other than type in a password. That would be an alternative, simple architecture. Where it gets hard to justify is when there's enough traffic that you'd have to pay a dude to sit there typing in the password all day, but they're apparently not that big.
(edited to add I hope they didn't actually use encfs and/or sudo, I was using them as an example of a user interface not as the final answer for how it would be implemented.)
They may've changed their billing system in the last 12+ months.
If it's a strong, long non-dictionary passphrase, then I'm not nervous. But, we don't really have any idea if it is or not. The fact that their web interface boxes had the private key on them is a huge concern - you'd ideally run all billing stuff on a machine that is highly locked down, perhaps not even accessible outside of a very strict range of IP addresses. The practice of leaving the private keys hanging around on the same boxes that ran the website makes me awfully nervous about the other links in the chain.
At that point, it really depends on how strong their passphrase is. If it's a dictionary word, we're pretty much fucked.
Having the private key on the web frontend sounds like a bad idea (as this incident has proved); they should use a backend system that is not accessible from the Internet. I don't know how the system works, but while the web frontend needs to encrypt CC numbers, I'm not sure it is required to decrypt them, as that could be performed in a more restricted environment.
Security incidents suck, but the good news is that they now have a chance to improve their security.
Personally, I think Linode didn't mess this up as badly as many other companies. The statement took a while to appear but it's there.
As a Linode customer myself I can neither report anything unusual on my box, nor activity on my CC. Thus far I'm somehow inclined to believe Linode when they say no CC information has been compromised. Either way, they are offering a great service and I'll stick with them. Worst case, I'll have to get a new CC.
Most likely not even related
There were 11 million American victims of identity theft in 2011 (out of 400 million), for a ratio of about 1 in 40 of the general population getting p0wned per year. Obviously not all ID theft is CC theft, but I'd guess a lot of it is. Based on friends' and family's experience, getting a CC number stolen every four decades of use passes the smell test, so I will go with that.
A quote from prweb.com found by google: "Started in 2003, Linode has grown to over 45,000 customers". I wonder what year that quote is from. Still "forty five thousand" is probably not totally ridiculous. It makes sense looking at their hostname scheme, ip space allocations as per whois, and some educated guesses. Lots of people including myself like linode.
So assuming the average linode customer is the same as the average joe6pack then 45000/40/365 = about 3 linode customers should get p0wned per day regardless of any linode problem. To make my inner EE happy I'll call the "noise level" about 3/day and evaluate the SNR based on that noise level to see if there's a signal of p0wnership.
Obviously the two reports over the course of a couple days is not 100% of all linode customers who got p0wned. But it does show that two reports doesn't really prove anything.
Now, as a made up example, 900 reports over 3 days would be a HUGE indicator "something" happened. But at a predicted noise level of about ten or so over three days, two reported is down there in or below the noise. So, as a long term linode customer I'm not freaking out (yet). My CC does email me every time a charge is made, and nothing weird has been seen. All quiet on the western front.
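For what it's worth, the back-of-envelope arithmetic above (using the 45,000-customer figure, which is itself an estimate) works out like this:

```shell
# Expected "background" rate of CC-theft reports among Linode customers,
# assuming they match the general population (numbers are the estimates
# from the comment above, not measured data):
awk 'BEGIN {
  customers   = 45000    # estimated Linode customer count
  yearly_rate = 1 / 40   # ~1 in 40 people hit per year
  printf "%.1f expected reports per day\n", customers * yearly_rate / 365
}'
# -> 3.1 expected reports per day
```

So a couple of reports in a couple of days is entirely consistent with the baseline, breach or no breach.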
The fraud won't occur till the database is released and the private key is cracked.
On a side note, I am not sure about the "some occurrences of plaintext Lish passwords" part. Seems like quite a goofup on Linode's part.
"... previously unknown zero-day vulnerability in Adobe’s ColdFusion application server. The vulnerabilities have only recently been addressed in Adobe’s APSB13-10 hotfix (CVE-2013-1387 and CVE-2013-1388) which was released less than a week ago."
To me that reads like 'it was a former zero-day exploit", as all exploits are.
seclists.org was one of the sites hit in the hack. See how they were down "over the last week". Also note that the Adobe security bulletin was released on April 9: http://www.adobe.com/support/security/bulletins/apsb13-10.ht...
So ya, these hackers found the problem (a 0-day in ColdFusion) and used it against Linode. Then Linode got Adobe to look into it and find the cause.
It is completely and utterly inexcusable.
Yes, but the whole private key thing and the ability to access the cc numbers from the front-end (unless they hacked the DB server as well, I think they didn't?) reeks of failure.
The only thing standing between the plain text cc numbers and the hackers is the passphrase, better pray that it's a good one.
There are thousands of very small SAAS services out there that bill monthly, which I assume would take advantage of gateway-stored credit card info, and then you find out that a comparatively larger, more trustworthy player like Linode doesn't.
Similarly it would be nice to also have them declare if they use something like Braintree's transparent redirect such that they never even see the card numbers.
I really don't see the need for that at all. What sort of credit card "lookup" are they doing exactly?
I'm not arguing, I'd love to read it! :)
It looks like seclists.org and nmap.org are hosted on different IP addresses, but both route to linode, and both have the same WHOIS information, so I think it's safe to say the writeup applies to both.
By the sounds of their writeup, the attackers just visited the "highest profile" sites as per the Quora link to obfuscate their real attack.
Having said that, if the API were provided via ColdFusion it wouldn't have helped anyway.
The whole system has now been rewritten in .NET. It would be interesting to see if I can find that old CF app. It ran solid for almost ten years on the same 3 NT4 servers handling tens of thousands of users per day. And it could have probably run on one server, except that we isolated the commerce and MSSQL functions from the front-end app.