Security incident update (linode.com)
318 points by asb on Apr 16, 2013 | 272 comments

I've canceled the card I used with Linode. The hacker may or may not have gotten access to the card number, but given the level of uncertainty I'd rather just cancel my card and get a new one reissued. I imagine my bank feels the same way.

Remember that, depending on where you're based and your agreement with your bank, you might lose any standard "protection" for fraudulent transactions that occur after you learn your card's been compromised.

This should be the top comment, and it's the reason why companies should disclose this information promptly and more often. Just as you would block your card if you got robbed, you should block your card if someone steals your cc data.

I did the same thing. I had to read HN to even have a clue about this as their email correspondence/site gave no indication that this was even a possibility.

I would assume a large majority of their clients don't read HN and would have no line of sight to do the same.

It's unfortunate that this happened, but I'll be looking for another host in the near future.

Agreed. As soon as I heard about this, I called my bank and canceled the card (no suspicious activity on it) and then regenerated 32 character passwords for all of my logins associated with Linode. An ounce of prevention and all that.

And for exactly that reason I've had my credit card reissued as well, even though no fraudulent charges have been made. The risk is just too great for me to feel safe with the old credit card number.

Edit: Fixed spelling mistake.

The one thing that puzzles me above anything else about this whole thing is: "Linode uses ColdFusion".

I mean, you have this Linux company that's pretty much an open source champion to us Linux guys and then... Adobe ColdFusion. Really?

I know nothing about ColdFusion, don't get me wrong, I just find it an incredibly odd choice for a Linux company.

That having been said, I think Linode just learnt a really unpleasant lesson and will become an even more secure provider for it. I believe that moving my sites away from them at this moment would be like selling stocks when they're about to go up.

Linode is a Linux company, but it's also a company founded in 2002-3. Back then, there weren't many great options for web programming and ColdFusion would have made a degree of sense (PHP 4.0 was released in 2000, and Ruby on Rails 1.0 didn't appear until 2004; Django was released in 2005). A modern, reasonably-secure web stack in those days really did mean "PHP with register_globals=Off". (Facebook and YouTube were both created later, and in PHP).

It's a bit surprising that they haven't moved away from CF, but this is a good example of how hard it can be to transition away from legacy technologies. It's also not clear that security would have been improved by adopting, say, Rails, given that there has been no shortage of security issues there either.

Even back in 2002, the writing was on the wall for ColdFusion as the costs went up and many bugs[1] had gone unfixed for multiple major expensive releases during the legacy-customer milking phase of fading out of the market. Forget PHP4, even PHP 3 was a big step up in features, performance (e.g. simply issuing a database query was over an order of magnitude faster!), reliability and even security – it was easy to find advice for fixing the risky defaults and libraries for common tasks like sanitization.

My guess is that this was either licensed or not considered a core feature and handed off to someone who was already invested in Cold Fusion.

1. e.g. nested loops were broken so for i: for j would repeat j[0] i times rather than correctly iterating the inner loop. The app server tended to leak memory and crash, particularly on non-Windows platforms, and the automatic restart mechanism was completely broken.

Also, if it works for them, it may not be financially viable to rewrite all of their code. For example, I work for a large company that uses Perl for all the backend stuff on our website. If we were to change to a more "modern" stack, we would have to stop and focus on that for a period of time instead of pushing the company forward and improving the customer experience etc.

In the meantime, Perl works for us. It's not cool or sexy but it gets the job done. Probably a similar situation at Linode.

> In the meantime, Perl works for us. It's not cool or sexy but it gets the job done.

What is not cool about Perl? You don't need to transition to a new framework outside of Perl. You can upgrade your backend by investing your time in rewriting the Perl code to newer standards.

> What is not cool about Perl?
Almost everything is not cool about Perl. Perl felt crufty and ancient even when I was learning it fifteen years ago. It's only gotten weirder, cruftier, and more ancient since then.

I dunno, it's been nearly as long since I learned Perl (or used it much) but the stuff I've seen out of the "modern Perl" movement (http://onyxneon.com/books/modern_perl/index.html) looks a lot cleaner and more, well, modern than the Perl I used to live with. You can still write "crufty and ancient" Perl, in other words, but you don't have to.

It's hard to find good Perl developers and even harder to run multi-language libraries for the same online portal (ie same site, domain, database, hosting environment: hardware, OSs....and so on).

I disagree. First, it's hard to find good developers, period.

Second, in general, good developers are hard to find by design (e.g., to avoid recruiters, but also because they are most of the time already doing interesting work).

Lastly, yes, it's hard to find good Perl programmers to do boring work, under boring conditions, at a boring wage. But then that's true of all good developers. Try to find a Rails programmer to remediate crappy code (I've tried).

You're missing my point: PHP developers who know how to build websites are available ten to the dozen. But most of the Perl developers we've tried to recruit over the last 18 months are either completely clueless at writing RESTful APIs or just terrible developers (sometimes even both).

We've had far more success at hiring good PHP developers and teaching them Perl than we have had at finding Perl developers who can write web technologies.

But this might just be a facet of our location (a suburb that's more than an hour away from any city), so developers of "niche" languages (such as Perl has sadly become) tend to get drawn to the city and higher wages. Whereas those who are still good developers but not as money-driven tend to pick the more popular languages, because that's where the work is in the area they want to live. (That's probably an unfair generalization, but it's true for the local area where I live.)

Very cool! Build vs. buy, you're building, everyone wins (plus you're a great employer for training your devs).

What's your product or service? Now Perl programmers are seeking -you- out :)

And I'm sure you already know about the YAPC job fair: http://www.yapcna.org/yn2013/job-fair.html

Thanks for the YAPC link. We're UK based though, so quite a way away from Texas. :(

We do online games (amongst other things). Though I don't want to go into too many details as I try to keep a degree of anonymity online.

> What is not cool about Perl?

Perl is my first interpreted language. It has a very special place in my heart, even if it was awkward at first to get used to it coming from a C background. Perl will always be a cool kid.

I think it's interesting when people tie sentiment to coding choices. As previously stated, I could never wrap my head around the horrid syntax, and I always had the feeling that if you didn't know Perl from the beginning, learning it was going to be a long, hard trek.

As for 'cool', I'm not so sure. Perl will be a niche sandbox for quite a while, but most of my run-ins with Perl of late have involved trying to remove it in favor of something else. I think choosing a more widely accepted language has legs for a variety of reasons, including code portability and long-term maintainability, both stemming from the fact that if there's only one guy maintaining all of the Perl code then that's a very-bad-thing.

But, that's just me. To wax nostalgic on Perl and use that as a basis for use-case seems, at least somewhat, reckless.

Perl was Ruby before Ruby was around to be the ugly cool kid.

I think most of the time it's got nothing to do with financial considerations; it's more a question of whether anyone feels strongly enough about it to rewrite it in a fit of pique/on the sly/in a slow month, and then on top of that has the power/influence/charm to push it through. No manager will ever say 'this non-essential system needs rewriting in a modern way'.

Rewrites where you're already intimate with the code base don't take that long for non-core stuff. I assume it's the web interface that's written in CF (which you can see from viewing the source in their signup and login pages). I guess you could argue it's core to their conversion process, but I mean their core skills are servers and their maintenance rather than web development so they probably actually haven't spent that many man hours on it.

>Facebook and YouTube were both created later, and in PHP

YouTube was always done in Python, as far as I understand. http://www.youtube.com/watch?v=G-lGCC4KKok

But yeah, back in 2002-3 ColdFusion was an option.

Youtube pre-Google was PHP

Youtube pre-Google was still Python, from the way several Youtubers have told it to be. Maybe an initial prototype was PHP, but not what was running when it was purchased.

And trust me, there is no way Google ever would have said "rewrite it in Python." They would have said "rewrite it in Java," like every other frontend they run (Gmail, Calendar, Docs, etc).

My primary reference that it has always been Python is this video from 2007, which was only shortly after they were acquired.


An option much like washing your feet in boiling sulphuric acid is a way of getting sand out from between your toes.

When something makes PHP look like a bastion of sanity and competence, you really should run a million miles in the other direction.

With all of the bugs/costs associated with CF, it's strange to think of how many huge web properties made use of it (MySpace comes to mind), but the world was a very different place back then. As you mentioned, the choices were far fewer and I think the comfort level surrounding open source also played a role. In addition...

1) It was most common then to go the cgi-bin route for server side, which wasn't pleasant. CF offered an alternative to that very early on and made webdev more "accessible".

2) ColdFusion was baked into a lot of GUI based web design tools (I can only think of the name of one, NetObjects Fusion, because I rarely used those sorts of tools). Even though they probably weren't using a GUI based tool, being baked into several meant that peoples' exposure to CF was pretty common. I did absolutely nothing with ColdFusion, but routinely ran into it with freelance work I did.

3) In large "enterprise" shops, I remember losing a bid because a manager Google(tm)'ed PHP, saw that it originally stood for "personal home page", and told me they needed to hire someone more professional (thank God I dodged that bullet). Open source web programming languages in general made managers uncomfortable for various FUD reasons. Case in point: I remember CF jobs on the Windows side getting eclipsed, nearly overnight, by RFPs for ASP in VBScript. ASP, after all, was backed by Microsoft, commercial, and effectively free (included with their Windows NT/2000 Servers).

There was a heck of a lot of Java out there used for web development around that time. But your point still stands - Java probably would've been a lot more difficult to set up and maintain than CF.

I beg to differ. If it was a Java based solution maybe they would have migrated more easily.

For example, Struts was released around 2000 (now we have many more choices for Java web frameworks). Since then they could've migrated to Struts 2, and after that, for example, to Spring MVC, or even stuck with Struts 2.

Although, if I remember rightly (wasn't it around 2004?), isn't CF now Java-powered under the surface?

Yes, ColdFusion is now Java-powered:

"With the release of ColdFusion MX 6.0, the engine had been re-written in Java and supported its own runtime environment, which was easily replaced through its configuration options with the runtime environment from Sun. Version 6.1 included the ability to code and debug Shockwave Flash." [0]

[0] http://en.wikipedia.org/wiki/Adobe_ColdFusion#Other_features

>That having been said, I think Linode just learnt a really unpleasant lesson and will become an even more secure provider for it. I believe that moving my sites away from them at this moment would be like selling stocks when they're about to go up.

What concerns me is that they would have been perfectly happy to sweep most of this under the rug if Ryan hadn't forced their hand.

The security lesson is one thing (hacks happen). I'm far more concerned with the lesson in transparency, as that speaks to the character of the company.

Don't assume that the most transparent company out there isn't choosy about what it's being transparent about. There are plenty of things we never know that don't hurt us. I choose not to concern myself with such things.

The problem is when you don't know things that could hurt you. "Your credit card was stolen" is something that you need to know. "Your password was compromised" is something that you need to know. "Let's not tell anyone about it, and hope that this information isn't abused" is not a valid course of action for a company that cares about their customers.

The problem is that Linode is not anywhere near the "most transparent company out there."

> What concerns me is that the would have been perfectly happy to sweep most of this under the rug if Ryan hadn't forced their hand.

That's a rather cynical thing to say. There's no proof that without Ryan's chat log, Linode wouldn't have told us anything.

I'm a long time Linode customer. I love the service they provide. I hate their lack of transparency.

The 'proof' is that the announcement came in very short order after Ryan's comments, and in the past, Linode has _never_ made such announcements (despite serious security breaches)

E-commerce is expensive to develop, and ISPs operate on razor-thin profit margins. Most ISPs grow organically, so their code base is older.

The ISP 'grave yard' is full of companies that tried to roll their own billing system in PHP.

Perhaps they simply enjoy using ColdFusion. It gets a lot of hate, but some of the features that ship with CF (templating, extremely easy db access, PDF generation) made/make CF a rather powerful and fun language. If only Adobe had chosen to release it for free...

You're right about it being an odd choice for a Linux company, though.

You can build on CF for free using Railo.

Linode excels technically at providing reliable VPS instances. Unfortunately, that's overshadowed by their lax approach toward security and their inability to be transparent when problems come up. This is not the first incident where their lack of transparency has come under fire.

This latest update dances around the issue and doesn't go far enough to explain what happened, whether or not your data is at risk, and what is being done to rectify it. If you are a Linode customer reading this, there is no indication as to whether or not you should cancel your credit card number. You, the customer, are left to decide on your own without all the information.

Does Linode have an obligation to advise their customers to request new card numbers? It doesn't look as if the card information was stored in compliance with PCI-DSS, and their systems were breached and private information was compromised. Is "the private key was also compromised but we don't think they can guess it, so let's just move on" good enough?

Customers shouldn't have to read between the lines on security updates like this. I expect the companies I deal with to be up front and transparent, even if it makes them look bad. The reality here is that they messed up, not because they didn't patch a week-old exploit soon enough, but because of the poor security practices they had in place internally that allowed the hacker to get the information they did, and by not being up front with their customers to begin with.

Any company can tell you that they take security seriously and that their customers best interests are their highest priority, but what ultimately matters are your actions. What are your security policies? What is your company culture toward security? How quickly do you announce problems and how transparent are you in those announcements?

> It doesn't look as if they card information was stored in compliance with PCI-DSS

Why do you say that?

The requirements are basically to use strong encryption, only store those encrypted numbers on a system not accessible from the public internet, have auditable access controls and logs, and have a procedure for keeping systems up-to-date. The last 4 digits of the card are not subject to the same requirements as the full account number. All of that doesn't prevent the scenario that happened playing out -- someone gets into an internet-connected server, then can connect from there to the database server with the encrypted credit card numbers.

The requirements also include responsible storage and management of cryptographic keys so that individuals are not able to access your credit card database and private key, in which case the only thing standing in between them and your customer credit card information is a password.

So they violated the whole "only store those encrypted numbers on a system not accessible from the public internet" part.

No, they didn't... You have to be able to reach the server over a network somehow to be able to store new cards and charge existing ones. The requirement is not that the server have no NIC, just that it not be internet-facing itself. The DB server was only accessed through the compromised web server, not directly through the public internet.

The attacker claimed that he compromised their webserver. If he had access to the private key and encrypted numbers, then they were indeed being stored on an internet-facing machine.

  07:52 < HTP> the CCrypter class of the linode application
  context was accessable from outside the wwwroot using
  undocumented ColdFusion methods. i was fully able to
  decrypt the ccs using the in-memory privkey that they
  supplied the password for.

Let's make this more clear:

Linode has two servers. One is a web server that runs their website and contains its source code. One is a database server containing customer information and encrypted credit card numbers. The web server is connected to the internet. The database server is not. They communicate over the private network.

The web server is compromised. The attacker has the source code, which includes the IP/host and login credentials for the database server, since the website code talks to the database to function. The attacker can now access the encrypted credit card numbers in the database by connecting from the web server.

At no point are the encrypted numbers stored on an internet-facing machine, but the attacker now has access to them. There is no evidence that says otherwise. There's no way to build this that doesn't have a path from the internet to the card numbers either, since eventually they need to get onto an internet-connected machine to talk to the payment gateway and actually charge those cards.

> There's no way to build this that doesn't have a path from the internet to the card numbers either, since eventually they need to get onto an internet-connected machine to talk to the payment gateway and actually charge those cards.

I have to disagree.

Web server has write-only access to a remote database table on the accounts/billing server, writes account change requests; accounts/billing server polls table, matches account numbers to billing information, submits charge requests to payment gateway via a firewall that allows access to only a single IP or set of IPs; accounts/billing server writes charge result back to another table; web server has read-only access to the results table and subsequently changes the user's account type or sends out an email or performs other administrative duties.

That's how I'd build it, I've done similar stuff before. (I have a paranoid business client.)

The nice thing about this is that you can capture credit card information on the web server, encrypt it with the public key, and then decrypt it using the private key which is only on the accounts/billing server.

Getting unauthorized remote access to the accounts/billing server would be a challenge. If they needed authorized remote access to it for customer service reasons on ports other than ssh, I'd use authpf on the pf firewall sitting between the accounts/billing server and the rest of the network, along with a unique password-protected ssh key for each employee. But, really, 1 or 2 sysadmins are the only people who should have non-console access to it.
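For illustration, here's a rough single-process sketch of the write-only queue pattern described above. One sqlite database stands in for the isolated accounts/billing server's tables, and all table and function names are invented; this is a sketch of the design, not anyone's actual billing code.

```python
import sqlite3

# One in-memory DB stands in for the billing server's database.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE charge_requests (
    id INTEGER PRIMARY KEY, account TEXT, amount_cents INTEGER,
    processed INTEGER DEFAULT 0)""")
db.execute("CREATE TABLE charge_results (request_id INTEGER, status TEXT)")

def web_submit_charge(account: str, amount_cents: int) -> int:
    """Web tier: write-only access to the request table."""
    cur = db.execute(
        "INSERT INTO charge_requests (account, amount_cents) VALUES (?, ?)",
        (account, amount_cents))
    return cur.lastrowid

def billing_poll() -> None:
    """Billing tier: polls for new requests, matches them to card data
    (which lives only on this tier), charges the gateway, records results."""
    rows = db.execute(
        "SELECT id FROM charge_requests WHERE processed = 0").fetchall()
    for (req_id,) in rows:
        status = "approved"  # stand-in for the real payment-gateway call
        db.execute("INSERT INTO charge_results VALUES (?, ?)", (req_id, status))
        db.execute("UPDATE charge_requests SET processed = 1 WHERE id = ?",
                   (req_id,))

def web_read_result(request_id: int):
    """Web tier: read-only access to the results table."""
    row = db.execute("SELECT status FROM charge_results WHERE request_id = ?",
                     (request_id,)).fetchone()
    return row[0] if row else None

rid = web_submit_charge("acct-42", 1999)
billing_poll()
print(web_read_result(rid))  # approved
```

The point is that the web tier never holds card data or gateway credentials; even a full web-server compromise only lets an attacker enqueue requests and read statuses.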

  > The web server is compromised. The attacker has the
  > source code, which includes the IP/host and login
  > credentials for the database server, since the website
  > code talks to the database to function.
IMO, if internet-facing machine A is storing login credentials to machine B, then machine B should be considered internet-facing.

There is no reason for a properly-designed system to be storing passwords or credit card numbers in the same database as the rest of the user's data. Such security-critical data should always be placed on a hardened machine, with the web server communicating via a small and easily-auditable interface.

> communicating via a small and easily-auditable interface

... isn't that called 'a network'? This seems to contradict your earlier statement about 'machine B should be considered internet-facing.'

It's not a contradiction, because 1) the webserver would not have login credentials to the hardened machine and 2) the interface would be much smaller than "everything in SQL".

A typical example of such an interface would expose about four operations:

  // Return true if the given user/password pair is valid.
  bool PasswordValid(string user, string password)

  // Return true if the password was changed successfully.
  bool ChangePassword(string user, string old_password, string new_password)

  // Returns a reset token.
  string RequestPasswordReset(string user)

  // Returns true if the password was reset successfully.
  bool CompletePasswordReset(string user, string token, string new_password)

With reasonable changes to support two-factor, if needed.

Credit card data would have a similar interface -- add card, remove card, list user's cards without full cc#, charge card.

The advantages of this over the sort of "hashed passwords in MySQL" design are obvious. Not only is sensitive data protected against "SELECT *", but it's now possible to apply security policies to password management. For example, the password service might enforce rate limiting on how often a particular user's password can be checked -- that way, even if the web server is compromised, the attacker will be unable to compromise passwords any faster than they could via the standard login screen.
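The rate-limiting idea is easy to sketch. This is a toy, single-process illustration of the interface above, with invented names and limits; a real service would persist its state and use a slow KDF (bcrypt/PBKDF2/scrypt) rather than a single hash.

```python
import hashlib
import hmac
import os
import time

class PasswordService:
    """Toy sketch of a narrow password service with per-user rate limiting."""
    MAX_ATTEMPTS = 5      # guesses allowed per user per window (illustrative)
    WINDOW_SECONDS = 60

    def __init__(self):
        self._records = {}   # user -> (salt, digest)
        self._attempts = {}  # user -> recent attempt timestamps

    def set_password(self, user: str, password: str) -> None:
        salt = os.urandom(16)
        # A production service would use a slow KDF here, not bare SHA-256.
        digest = hashlib.sha256(salt + password.encode()).digest()
        self._records[user] = (salt, digest)

    def password_valid(self, user: str, password: str) -> bool:
        # Rate limit: even a fully compromised web server cannot test
        # guesses faster than the normal login screen would allow.
        now = time.monotonic()
        recent = [t for t in self._attempts.get(user, [])
                  if now - t < self.WINDOW_SECONDS]
        if len(recent) >= self.MAX_ATTEMPTS:
            return False
        recent.append(now)
        self._attempts[user] = recent
        if user not in self._records:
            return False
        salt, digest = self._records[user]
        candidate = hashlib.sha256(salt + password.encode()).digest()
        return hmac.compare_digest(candidate, digest)

svc = PasswordService()
svc.set_password("alice", "correct horse")
print(svc.password_valid("alice", "correct horse"))  # True
print(svc.password_valid("alice", "wrong"))          # False
```

After MAX_ATTEMPTS guesses in a window, every further check returns False, which is exactly the "no faster than the login screen" property described above.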

The password machine would typically be configured to have only the password service and SSH running, with SSH access limited to a key that's stored in a safe somewhere and used only for emergencies.

If a developer doesn't feel comfortable building such a system themselves (reasonable), then there are many commercial products available. They're expensive for a home user or ramen-budget startup, but a company like Linode would be able to afford one easily.

That'd still be 'internet-facing' though, by your definition. Yes, it might require one more round of cracking (the simple interface), but the machines are still connected.

No, it would not. Please read my post again. The webserver would not have login credentials for the password management machine.

Additionally, I object to your implication that any network service can be trivially compromised with "one more round of cracking". This is a view of software security that is propagated by science-fiction movies, and is not grounded in real life. Vulnerabilities that permit remote code execution are relatively easy to prevent by taking appropriate precautions when designing and implementing a network service, and almost every such vulnerability has its root cause in doing something obviously insecure (e.g. writing in C/C++ or a language with eval(), passing user input into an OS procedure).


Could you share the source of this statement?

It's from the pastebin'd IRC log: http://pastebin.com/7WXRDyAg

Why would you trust this so-called "hacker"? It's that Marcan clown, after all.

> Is "the private key was also compromised but we don't think they can guess it, so let's just move on" good enough?

You have to take that a step further... "we don't _think_ they can guess it, and we don't _think_ they were able to recover it from RAM"

I really wish they would officially comment on the 'cover up' aspect. Security breaches happen, and are forgivable. But attempting to broker a 'silencing' deal with the intruders and hoping your customers will never be the wiser is not.

All this update does is restore my faith in their ability to store my information correctly. It does nothing to reassure me that they won't try to cover anything up again.

I feel like there is a fine line between a "cover up" and a "grey hat disclosure". Reading the IRC logs from #linode [0], HTP hacker ryan* seems to say that they made a deal of "we don't tell if you don't tell" but then Linode broke the deal by reporting them to law enforcement.

You could see this as a cover up or you could see this as a disclosure from the crackers to Linode. I see it as a cover up, since Linode should tell us anyway that servers were broken into, but I could see how it could have been seen as reasonable disclosure.

In any case, if what ryan* is saying is factual, then Linode reneged on the deal, a shady one in the first place, HTP has credit card hashes and evidence that Linode stored Lish passwords in plain text (a log maybe?), and the only thing separating them from credit card numbers is the passphrase on the private key, which is hopefully a strong one.

[0] http://turtle.dereferenced.org/~nenolod/linode/linode-abridg...

Perhaps nitpicking, but looks like HTP might have gained password hashes and encrypted credit card numbers, as well as a passphrase protected private key file. That's if I'm reading between the lines of both ryan and Linode's statement.

This is why you build security in layers. If one layer gets broken, there are other layers to protect you.

I'm not underestimating the severity of this incident, but I think it's good to see this layered approach being used by Linode. If the passphrase was strong enough, and not stored anywhere, then I doubt HTP would be able to gain access to the private key, even if they got the actual private key file.

Sorry, 6am here, that's what I mean. It is good to see Linode adopting this layered approach, though it is what you're supposed to do. The other route would be storing the private key file unencrypted, which, thank goodness, wasn't the case. Knowing Linode, I'm fairly confident it's a strong passphrase; it would be imprudent to secure credit card details with something like "swordfish".

Well, the official statement re: the passphrase is:

> @Eivind – our private key is stored only in encrypted format. The passphrase is not guessable, sufficiently long and complex, not based on dictionary words, and not stored anywhere but in our heads.

> in our heads

So it's short enough to remember and likely has some sort of pattern. There's a limit to what a person can remember, lower if there are several people that have to remember it.

It is not very difficult to memorize random strings of arbitrary characters. I use a password manager to manage most of my accounts, but the important ones, like banks and email, I keep in my head. I use my password manager to generate a 15 character string of alpha+numeric+symbols. The symbols would kind of make it hard, except that in my head they are just upper-case numbers, mostly (shift-7, not ampersand). And in any case they are just positions on a keyboard (God help me if I need to enter one from my cell phone).
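A password of the kind described is trivial to produce with Python's standard library. The 15-character length and the letters/digits/symbols alphabet follow the scheme above; the exact symbol set (`string.punctuation`, 32 characters) is my assumption.

```python
import secrets
import string

# Letters + digits + ~30 keyboard symbols, per the scheme described above.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 15) -> str:
    """Uniformly random password using a cryptographic RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # e.g. something like 'k9$Fq2!vT&0pZx%'
```

`secrets` (rather than `random`) matters here: it draws from the OS CSPRNG, so the entropy estimates below actually hold.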

To memorize, just copy it into your favorite text editor, then type it 25 times in a row and delete. If you are paranoid, make sure you use a text editor that does not store temp files. Do not save this password anywhere. Set a timer and do it again an hour later, then again the next day. 10 minutes of your time and you have a password in your head. I can keep 10-20 of these at a time, maybe more since I seem to be able to type older ones from years ago.

I don't consider myself to have a great memory. I can barely remember lyrics to songs I've listened to dozens of times and it takes me hours and hours to memorize lines for plays. But I started doing this for passwords ten years ago and it is very reliable.

The thing is, for your personal bank account a 15-character password is acceptable.

But for x many customers' credit card details you're really looking at a much longer password than that. I'm talking 64 characters or more of pure random data.

You shouldn't be compromising for the convenience of being able to remember a password when it secures such critical data in my opinion.

Edit: I do agree, though, that your method is a very good way of remembering passwords.

At 15 characters and my character set ([a-zA-Z0-9] and about 30 symbols) I have about 92 bits of entropy. The mean time to brute-force my password hash is more than several years using 100% of the computing power on the planet, let alone an AES brute force. If memory is no issue, 256-bit passwords (usually displayed as 64 hex digits) are wonderful, and there is no reason to stop short of that for pass keys that are stored electronically.

If I was responsible for this key I might increase from my normal 15 to 20 characters, giving me more than 120 bits of entropy, and I would expect to be safe from offline brute force for decades, and I could remember it.
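The arithmetic is easy to check (assuming uniformly random characters from a 92-symbol alphabet, i.e. 62 alphanumerics plus ~30 symbols):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy for a uniformly random password."""
    return length * math.log2(alphabet_size)

print(round(entropy_bits(92, 15), 1))  # ~97.9 bits for 15 characters
print(round(entropy_bits(92, 20), 1))  # ~130.5 bits for 20 characters
```

The straight calculation actually gives closer to 98 bits for 15 characters, a bit above the ~92 quoted, and 20 characters does clear 120 bits comfortably.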

It's trivial to memorize an entire sonnet. Actors and actresses memorize many times that amount. It's also trivial to write a sonnet. How many bits of entropy do you think a sonnet has?

Are you saying that you are happy to type in an entire sonnet when prompted for a password?

Being realistic, it is clearly unreasonable to expect someone to type in such a long password, regardless of whether they can remember it or not.

For a consumer, perhaps, but for protecting thousands of individuals...

You can't assume this. It could be ridiculously strong, and, with a lot of use, has become remembered.

I'd argue that actually it's better to assume the worst case here, not what it potentially could be.

That and the fact that an offline attack can be run on this key is not promising.

You've assumed that everyone involved has the entire passphrase.

They were storing Lish passwords in the clear... Does it really sound like they care enough to use some sort of multi-party brokered passphrase accountability system?

Not really, just more than one person as is implied in the grammar.

Edit: These "making an assumption" arguments are silly. It is good practice to assume the worst case; to assume the best in this situation is bad.

I'm impressed by your ability to extrapolate "in our heads" to mean whatever you wanted it to mean.


Even if they have a passphrase it has to be strong enough to withstand brute force for months. If you're not smart enough to keep the private key separate from the crap you're trying to protect, why should I think you're smart enough to ensure your passphrase is good enough?

why should I think you're smart enough to ensure your passphrase is good enough

Key strengthening.

OpenSSL doesn't use key strengthening on password protected RSA keys. GPG does, but I don't know how much it does by default.
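For reference, key strengthening just means feeding the passphrase through a deliberately slow derivation function before using the result as the encryption key. A minimal sketch with Python's stdlib PBKDF2 (the passphrase and iteration count are illustrative):

```python
import hashlib
import os

passphrase = b"correct horse battery staple"  # illustrative only
salt = os.urandom(16)

# 200,000 HMAC-SHA256 iterations make each brute-force guess
# roughly 200,000x more expensive than a single raw hash.
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000, dklen=32)
print(key.hex())  # 32 bytes, suitable as an AES-256 key
```

The point is that the attacker must pay the full iteration cost for every candidate passphrase, which multiplies the effective strength of whatever passphrase was chosen.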

Except that LISH passwords were plaintext...

> All this update does is restore my faith in their ability to store my information correctly

How is your faith in that regard restored, when they just said they stored passwords in plain text? They didn't store passwords in plain text by accident, they chose to. They knew the risks and didn't care.

"Linode Manager user passwords are not stored in our database, but their salted and cryptographically hashed representations are."

Sounds like it's not stored in plaintext.
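A salted, cryptographically hashed representation of the kind Linode describes can be sketched as follows (the scheme is illustrative; a production system would use a deliberately slow hash such as bcrypt or PBKDF2 rather than bare SHA-256):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest    # store both; the plaintext is discarded

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("hunter3", salt, digest))  # False
```

The per-user salt is what defeats precomputed rainbow tables: identical passwords hash to different digests for different users.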

And then "There were occurrences of Lish [the Linode Shell] passwords in clear text in our database."

"There were occurrences of Lish passwords in clear text in our database."

Which is entirely different from "all passwords were stored in plain text". We don't know whether there were 1, 100 or a majority of such passwords. It's a bad oversight in each case, but the latter is quite a bit worse than the former.

Right, we don't know because they failed to disclose this.

Wouldn't you rather have them working on assessing, containing and repairing the damage rather than catering to the impatient internet crowd that is so used to immediate updates on everything that they can't fathom that putting together a responsible, useful, correct response might actually take a while? Honestly, these people don't understand what it takes to run a company and handle such an incident. As far as I'm concerned, the indignant vocal minority could shove it while I was working on resolving the problem for the majority of customers who expect exactly that from us.

Quoting Linode:

"There were occurrences of Lish passwords in clear text in our database. We have corrected this issue and have invalidated all affected Lish passwords effective immediately."

Presumably, they know how many customers are affected, because they invalidated their passwords. They didn't tell us.

I don't understand where you are coming from here.

Why do you think companies can only do one thing at a time ? And where have you worked where software engineers are drafting press releases or conducting security audits ? And why shouldn't keeping customers (who have their own apps) informed be their top priority ?

  Why do you think companies can only do one thing at a time?
Amdahl's law. Recovering from a catastrophic hack isn't parallelizable. Nobody can draft a press release before the security audit has been performed. You can't inform anyone before the security audit has been performed. The software engineers can't go to work before certain parts of the security audits have been performed. Certain parts of the security audit can't be performed before other parts have been performed.

  And where have you worked where software engineers are 
  drafting press releases or conducting security audits?
A bootstrapped startup? Depending on the scale of the company (and I have no clue how many people work at Linode), the work may be sequential simply by lack of manpower.

I believe the correct approach for us is to assume all Lish passwords were in clear text until a good explanation is given. It's unusual to store some passwords in clear text and hash the others, right?

Could be that the passwords were hashed, but something logged a password, or there was leakage somewhere during the login process.

All it takes is one debug log statement to leak through from development and you can be logging plain-text passwords on every login.
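One common safeguard against exactly this failure mode is a logging filter that scrubs credential-like fields before anything reaches disk; a rough sketch (the field names and pattern are assumptions, not Linode's actual setup):

```python
import logging
import re

class RedactPasswords(logging.Filter):
    """Replace password-like key=value pairs in log messages."""
    PATTERN = re.compile(r"(password|passwd|pwd)\s*=\s*\S+", re.IGNORECASE)

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = self.PATTERN.sub(r"\1=[REDACTED]", str(record.msg))
        return True  # keep the record, just scrubbed

logging.basicConfig(level=logging.DEBUG, format="%(message)s")
logging.getLogger().addFilter(RedactPasswords())
logging.debug("login attempt user=BAR password=FOO")
# logged as: login attempt user=BAR password=[REDACTED]
```

It's a belt-and-braces measure: the stray debug statement still exists, but the secret never reaches the log sink.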

Storing passwords in clear text is definitely not normal.

Also the social engineering implications could be huge. Many people use the same password across various sites so in theory there could be a lot of compromised email accounts. Which could then mean compromised internet banking sites, trading sites etc.

Bear in mind this can be as simple as a handful of people emailing their LISH password to support while asking if it is correct etc.

Or someone mistakenly logging "Bad LISH Password: 'FOO' for user 'BAR'" somewhere that logs to the DB.

Or users accidentally typing their password in the username field.

Nevertheless, this isn't the first time Linode has needed to spend their benefit-of-the-doubt points.

Also (and correct me if my understanding of this is wrong, since I've never used it), LISH is just a remote terminal service. You still need to know the VM's root password. And frankly, if you're doing it right, the root password should be a long, unique random string that you store somewhere safe and never use, since you should be using keys to log in to your box on a day-to-day basis.
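Generating such a root password is a one-liner with Python's `secrets` module:

```python
import secrets

# 32 random bytes (~256 bits of entropy), URL-safe alphabet.
# Store it in a password manager; use SSH keys for daily logins.
print(secrets.token_urlsafe(32))
```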

It's quite likely that someone has left root logged in on hvc0 (the LISH console) on their Linode, while logging out of LISH. Probably more than one person.

Also, access to LISH even without a root password provides access to some scrollback output, which could expose sensitive information.

Also, LISH allows multiple connections, all of which see the same console, so the attacker can just connect to it and wait for a root prompt to appear when root logs in next. (Does changing the LISH password prevent this attack if they're already connected? I doubt it.)

Also, most distribution boot processes can be messed with at boot from the console. For example, you can ctrl-c to stop important daemons from loading. In some cases you may be able to get a shell without the root password.

Also, Magic SysRq can be accessed over a serial console by sending a BRK. You do not need to be logged in to do this, and it could be used to kill processes, reboot, etc. I don't know if LISH allows sending BRK.

Yeah but most of these are generic problems with providing a remote serial console (excluding LISH allowing multiple shared connections to one console, which is obviously bad).

They've now reset all the passwords and fixed the bug that meant some of them were being stored without encryption. So what we're saying is that there was a window of a week or so where only the LISH users affected by the clear-text bug may have been open to attack, if they happened to use LISH during this time frame and the attacker was targeting them. Not great, I agree, but it could have been worse.

Hopefully, they'll change their setup going forward so that each LISH connection to the same VM gets its own console.

Shoot, you are correct. Definitely an attack vector.

The connection is to the same TTY when I log in via the web terminal and SSH at the same time, so if you are logged in via LISH anyone with the LISH password has access to the logged in user's console + scrollback.

I wasn't able to send BRK from Putty though.

I'm pretty sure this is correct. The LISH passwords just give you console access via ssh (as an alternative to the web-based terminal in Linode Admin), which is essentially the equivalent of getting to an ssh prompt. A login/password would still be required to access the machine.

It doesn't appear that you can reboot or enter single user mode from the LISH console login prompt.

I'd imagine the 'cover up' is because law enforcement is involved and this 'ryan' character is being looked for.

As these situations, especially on the internet, come out as "he said, she said", I think it's probably more important to keep focused on what directly affects you.

Sure, but in this case it wasn't Linode who initially came forward - it took a public announcement from the intruders for Linode to notify us.

Devil's advocate: preparing a press release from a company after plugging holes and auditing that makes the appropriate admissions and apologies probably takes more time than writing up a successful hit after attacking a page.

Linode didn't just take their time to announce this, just a few hours ago they were apparently telling customers definitively that their credit card information hadn't been accessed.

And that is "still" the case

My guess is that they probably went straight to the authorities, while playing along with the intruders as a way of buying time and collecting as much information on them as possible. I'm not surprised that they wouldn't make an official comment on that, though.

"There were occurrences of Lish passwords in clear text in our database."

I think that this is what's most criminal about the whole event. It's probably safe to assume that "there were occurrences of Lish passwords in clear text..." is a euphemism for "we stored all Lish passwords in cleartext."

Simply invalidating these is probably not enough. It's very likely that people reused their root login passwords as their Lish passwords, meaning that Linodes running without firewalled sshd are potentially vulnerable.

I hope Linode communicated who had "the occurrences" of compromised cleartext Lish passwords, so that those users could take appropriate action.

> "It's probably safe to assume that 'there were occurrences of Lish passwords in clear text...' is a euphemism for 'we stored all Lish passwords in cleartext.'"

This borders on libelous, in my opinion. If all Lish passwords had been stored in the clear, I think they would have said that. They've been pretty specific in the rest of the update.

They say they have "invalidated all affected Lish passwords effective immediately". I just logged in to Lish via SSH using the new password I had set on Friday, so I guess mine at least wasn't one of the "occurrences"....

Lish passwords in the clear might have been in support tickets stored in the same database, or chat logs.

Also: "It's very likely that people reused their root login passwords as their Lish passwords..."

Really? I guess it's never a good idea to doubt people's capacity for stupidity, but this seems very obviously a bad idea. Hopefully this isn't so common as to be "very likely".

I've been a Linode customer since the start of 2007 and my biggest worry right now is this...

> Credit card numbers in our database are stored in encrypted format, using public and private key encryption. The private key is itself encrypted with passphrase encryption and the complex passphrase is not stored electronically.

Since the update provides no further clarification that the private key was stored outside the compromised host, you can read between the lines that they are trying hard not to address that particular issue.

I really don't want to assume that the attacker can now brute force the private key's passphrase, and it might not take long if that "complex" passphrase was a word out of /usr/share/dict/words.

Or worse, that the billing process was monitored via the compromised network, RAM, or a key logger, to get the unencrypted private key or its passphrase.

Why won't they just tell us the private key's passphrase, so we can judge for ourselves how secure it is?

Once they give the private key passphrase out there would be no need for us to judge for ourselves. The hackers would have everyone's credit card information at that point.

He was joking

I'd recommend reporting your card likely stolen and getting a new number. The slight inconvenience is worth the peace of mind.

I recommend not bothering. Liability for fraudulent CC activity (in the US) is very low (max $500), and banks never enforce the liability for two reasons: 1) it's easier for them to chargeback the transaction to the vendor, thus they're not out of pocket, and 2) it causes customers to leave for a provider that doesn't hold them liable.

How much of your time spent chasing down and changing a CC number is worth a maximal risk of $500 with a very low probability of occurring?

EDIT: Thanks to nenolod for the correction!

Since Dodd-Frank, debit cards and credit cards have the same liability protections.

> maximal risk of $500

I'm pretty sure you're off by a factor of ten. If someone makes fraudulent charges with your credit card, your maximum liability is $50, and you'll only have to pay that if the charges occur after you've reported the card stolen. (There are different rules for debit cards, that doesn't apply here.)

The lish passwords are the easiest to abuse immediately, but the larger impact is on all the user accounts with weak reused passwords. While all of the manager passwords were reset, each VM's root password was not. Once a hashed password is cracked their root password is vulnerable (if it was reused). And that's probably a lot more people than had exposed lish passwords.

Everyone should go reset their virtual machine's root password if they reused them.

Exactly! I'm pretty sure it was all Lish passwords. Yesterday the news was all focused on the credit card information but now it makes sense why Lish passwords were not working (when I was 100% sure I had it correct--Friday's email said nothing about lish iirc). Interestingly I even mentioned this as part of a ticket and they brushed it off as nothing... hmm.

sabat: You are hellbanned, FYI.

This comment is the last live comment: (https://news.ycombinator.com/item?id=5399303)

This comment is the last dead comment: (https://news.ycombinator.com/item?id=5399305)

What isn't in this post:

- Any mention of the claim that they had made a deal with the hackers. No attempt to deny the claim.

- Any mention of when the systems were compromised or when they discovered that they were compromised. The post starts with "Yesterday," mentions working "around the clock" and mentions Friday, giving the impression that this is all very recent, when according to the hacker Linode has known for two weeks.

If it was your company and you had decided to make a deal with the hackers, would you publicize or publicly address that fact? My thinking is that doing so would incentivize others to do similar acts because you are an easy mark.

I find it somewhat worrying that they haven't directly addressed the points of the hacker's claim.

I am particularly interested to find out whether linode did enter into a deal with the hackers.

As a computer user, I'm thoroughly tired of the constant stream of security vulnerabilities in Adobe's products: Flash, Adobe Reader, ColdFusion. I avoid their products mostly for this reason. Flash, which I can't avoid completely, I have set up to only run when I say so, which I only do for a couple of websites I trust. And why a static document reader should be able to run arbitrary code completely eludes me.

Many companies should learn the lesson from Microsoft's huge shift years ago when they implemented the Trustworthy Computing Initiative. They have much larger targets (an operating system, a hypervisor, a database server, a full office suite, etc.) compared to a browser plugin or a document reader, and in my experience they're doing a far better job at keeping security at an acceptable level.

Comparing IIS+ASP.NET security versus some frameworks such as Rails (another huge source of vulnerable web applications) is also enlightening.

> why should a static document reader be able to run arbitrary code

Because PDF was never meant to be a static format, and Adobe Reader was never meant to be a static document reader.

However, since most PDFs are static anyway, I think it would be a very good idea for PDF reader software to treat every PDF as a static document by default, and only allow scripting after the user clicks Yes on a scary-looking warning.

Correct. I wasn't clear, so let me clarify: I regard every PDF I read as a static document. That's how I see them and that's how I read them, with readers that treat them as static. I have no use for a PDF document that wants to act as software, and I wouldn't use a bug-nest of a piece of software to run it.

You're not the target market for those types of PDFs then. In large corporate environments, PDFs are roughly the equivalent of small web apps that handle a lot of the data entry and sharing across the organization.

I'm not a proponent of this approach, but it is what it is and it is the reason that Reader is such a bloated security-ridden nightmare.

Then those features should only be enabled by default in corporate versions of Adobe Reader. Average internet users have very little use for non-static PDFs, except the occasional "fill out this form" PDFs where you can't save any changes.

They reset the passwords on Friday. Even assuming that they've been working for the last four days on discovering the extent of the intrusion, there's a pretty big gap between Friday's announcement:

"This activity appears to have been a coordinated attempt to access the account of one of our customers. This customer is aware of this activity and we have determined its extent and impact. We have found no evidence that any Linode data of any other customer was accessed. In addition, we have found no evidence that payment information of any customer was accessed. ... Out of an abundance of caution, however, we have decided to implement a Linode Manager password reset,"

and their disclosure today. I find that I'm uncertain whether they would've released today's disclosure if it weren't for the IRC chat log getting passed around online.

I don't get why they thought it would be prudent to reset everyone's manager passwords "out of an abundance of caution", but didn't bother to mention the possible leak of payment information. We don't have any information on the strength of the passphrase used (nor can they provide any, of course); we have no idea how feasible it might be to brute-force it.

Even if the full credit card numbers are never broken, the last four digits are still available in the clear. That has been enough to give unauthorized persons access to accounts on other services in the past (see the Matt Honan thing from last year: http://www.wired.com/gadgetlab/2012/08/apple-amazon-mat-hona...). Depending on what other information was stored in the database, customers may be at risk of having accounts compromised elsewhere using the billing information from Linode. Linode really should have included that information in Friday's announcement.

Also, ColdFusion has had 103 different CVEs going back to 1999 (http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=coldfusion). Adobe doesn't really have a sterling record when it comes to this stuff. A nice example is the recent CVE-2013-0632 (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-0632), a remote code execution exploit that was discovered already in use back on January 4th but didn't have a fix available until January 15th. Linode -- and their customers -- are a really big, juicy target for future CF vulnerabilities.

This is what Caker (founder of Linode IIRC?) has to say about the strength of the passphrase:

"our private key is stored only in encrypted format. The passphrase is not guessable, sufficiently long and complex, not based on dictionary words, and not stored anywhere but in our heads." [1]

[1] http://blog.linode.com/2013/04/16/security-incident-update/c...

That doesn't make sense. Who here thinks they typed that in manually for every single CC they got? They didn't; encryption/decryption was automated, meaning anyone with access to the process that normally does it could have decrypted the data.

Also, the post says salted sha256 passwords are useless. Evidently, they do not have a clue how to safely store passwords, because that is not useless at all.

Finally, it's an outright lie that systems were not accessible, because we already know that they hacked seclists.org/nmap. Even their previous post acknowledged this.

It's a private key. To encrypt you only need the public key, which is not passphrase-protected.

Although I still wonder about how they run recurring charges.

It could be a manual script that he runs which prompts for the passphrase. I know that earlier in Linode's history, caker made it seem as if it worked that way from the way he acted in their IRC channel when the batch transactions were running.

This seems likely. Once a month, run a script, type in the complex passphrase, and all the billing gets done at once.

I mentioned this elsethread, but I've had changes to my account get charged to my card immediately (within minutes), even in the middle of the night. Last example of this was January 2012, so they may've changed their billing practices since then.

I suppose they could also keep the decrypted key in memory using something like gpg-agent.

I read somewhere (can't remember where now) that the hacker supposedly dumped the key from memory. My guess is they type in the passphrase on boot or something, and it lives in an agent thereafter.

I haven't seen that anywhere, and I find it unlikely.

I understand this, and any system that needed this information would not work if it relied on a human to decrypt.

I think the most likely answer is pretty simple and unsurprising: the key is loaded via an agent on boot (subject to the correct passphrase being entered) and stays resident in memory (either permanently or for x minutes/hours). Where's the problem?

If they did it that way, then because the attackers had access to the keys and the server code for an unknown period of time (up to maybe a couple of days), they may have been able to either extract it from memory or use the private key with someone else's credentials to decrypt some or all of the payment information.

I'm really trying to avoid speculation at this point -- we simply don't have enough information to make smart guesses -- but I think we can reasonably conclude that nobody at Linode was manually keying in a password every time there was a charge to a customer card, and therefore it's not insanely paranoid to cancel any cards associated with Linode.

Really the most annoying thing here is Linode's obtuseness about the attack and their processes and what's at risk, and if I end up moving to another service (as at the moment I'm inclined to), that will be the reason -- not because they got compromised, but because they valued their own butts over their customers'.

They're a VPS service for chrissake. They could explain in very technical terms exactly how manager passwords were stored and how the credit card information was stored & accessed and what other billing information may have been compromised -- a full disclosure -- and most of their customers would not only understand most of it, but they would appreciate it.

Sorry, I kinda went off on a tangent there, but the more I think about how this has been handled, the less I feel like I can trust them.

A single private key passphrase is all that stands between a bad guy and everyone's credit card information... Well I know I feel better...

But jokes aside, there is a huge gap between the strongest and weakest algorithms used to passphrase-protect a private key. I wish they gave us more information about the key.

Including: is it ONE private key for everyone? Or one per client/customer? How was the key pair generated? When was it generated? Which algorithm does it use, and at what key length?

> A single private key passphrase is all that stands between a bad guy and everyone's credit card information... Well I know I feel better...

Where do you get that from? They encrypt all of the CC info using the public key. You need the private key AND the private key passphrase -- and one of them isn't stored on a computer.

> Where do you get that from?

The article and the IRC log.

The IRC dude said flat out that the "bad guys" have the private key and the encrypted data. They also have made no claims that turned out to be untrue yet.

The blog post implies that the bad guys have the private key by talking about passphrases. Why are passphrases on the private key even a discussion topic unless the private key has fallen into "enemy" hands?

Therefore the ONLY thing between them and unencrypted data is the passphrase protecting the private key.

The private key shouldn't be on any machine that it isn't used on.

If we assume that the key is used on that machine, and is password protected, and is used automatically, then the unencrypted key, or its password, must exist in memory, if not on disk. Perhaps they log in when the machine boots and manually enter the password once, so it is cached and can be used for repeat decryption processes.

If the key is in memory, then a blackhat who knows what he's doing can get access to it.

This isn't a problem unique to Linode though. The same risk is present for every system that needs a password/passphrase to decrypt. This is why effective security (not absolute - which is impossible for any system that has to connect with other systems) can only be achieved through a layered approach.

Linode layered pretty well: Hashed and salted passwords, isolated systems, encrypted CC database and passphrase protected decryption key with no written record of what that passphrase is. They didn't do a perfect job but then who does (or even can)?

I'm pretty sure UnoriginalGuy understands that. What he says is correct, the attackers now have the private key file so all that prevents them from decrypting the CC data is the passphrase. They don't know what this is but they can try and brute force it which is why it's relevant to wonder what cipher the key uses, how many bits and how long, random and complicated the passphrase text is.

Hopefully, they used 4096 bits and a random >20 char string as the passphrase. That, combined with a standard algorithm like RSA, should be enough to keep us safe.

Cracking even a 1024 bit RSA key would cost at least millions of dollars. Credit card numbers are not worth that much.

Tens of thousands of credit and debit cards, many of which are likely to be corporate cards with extremely high spending limits ?

Worth no more than $1 a card on the black market. It'd be cheaper to buy cards than to try to crack the key. There's more supply than demand for stolen card numbers.

That's because the black market for credit cards is a lemon market [1] - known good corporate cards are much more valuable.

[1] https://en.wikipedia.org/wiki/The_Market_for_Lemons

Nobody needs to crack the key: merely the passphrase. Brute-forcing is relatively cheap.
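Whether it is actually cheap depends entirely on the passphrase. A rough feasibility estimate (the guess rate is an assumption; real rates vary wildly with the key-derivation function used):

```python
def years_to_exhaust(charset_size: int, length: int,
                     guesses_per_second: float) -> float:
    """Worst-case time to exhaust a uniformly random passphrase space."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_second / (365.25 * 24 * 3600)

RATE = 1e9  # assumed 1 billion guesses/sec for an offline attack

print(years_to_exhaust(26, 8, RATE))   # short lowercase passphrase
print(years_to_exhaust(92, 20, RATE))  # long random passphrase
```

A short lowercase passphrase falls in seconds at that rate, while a long random one from a full printable set takes longer than the age of the universe, which is why the character of Linode's "complex passphrase" matters so much here.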

> What he says is correct, the attackers now have the private key file

I will concede that this is highly likely, but they did not confirm that the private key was taken.

It is implied that it was taken.

The IRC log said that they had taken it and since Linode is talking about the passphrase in their blog post, the implication there is that it was taken too.

If the bad guys didn't have the private key then the passphrase is literally irrelevant to the discussion. Since they would have to break the primary cryptographic key rather than just the passphrase.

A passphrase protects a private key file NOT the data. So why would Linode even bring it up?

> A passphrase protects a private key file NOT the data. So why would Linode even bring it up?

I would bring it up when discussing how I cryptographically secured sensitive data, especially when people might be confused and try to argue that they should have used PBKDF2 rather than encryption (because they were thinking about password hashes).

At least if someone tries cracking the passwords they won't have to bother with a dictionary based attack or short strings ;)

Unless of course it's a double-bluff.

Digression: That would be a single bluff.

Bluff: saying one thing, doing the opposite ("our passwords are really long", when actually they are short).

Double bluff: saying one thing and actually doing it, because everyone will assume you are bluffing ("our passwords are really long, honestly!", and they actually are, but the bad guys will assume you are bluffing and using a short password).

Except, you know, system memory once the key material is unlocked... unless you really believe someone is sitting there typing in the passphrase for each non-batch CC transaction, because I do not.

What exactly do you propose as an alternative? At some point the card details need to be decrypted. Security always boils down to somewhere between total lock down and being able to actually get things done. The only semi reasonable alternative would be not to store any CC details and outsource that task to a payment processing company. It's not a great alternative though as you'd effectively be giving up the ability to swap payment processors (since they, not you, now have all your customers card details) for a potentially small and unproven bit of extra security.

Let's be honest here. How frequently do you think a company changes processors?

Agreed. However, IMHO you'd have to be insane to run a company with hundreds of thousands of monthly billed customers and turn control of the card details over to a third party (who, let's be honest, will be running their backend in a very similar manner, i.e. encrypted CC details protected with a passphrase). I'd take my chances the same way Linode chose to rather than punt the issue upstream and hope for the best.

I might be biased though as my default approach to most things is to do (and have control over) as much as possible yourself unless there are extremely good, unequivocal reasons not to.

Honestly, the gap can probably be attributed to them not knowing what exactly happened. Figuring out that you were compromised by a zero day vulnerability isn't an easy process to work through.

> we have no idea how feasible it might be to brute-force it.

Sorry to be morbid, but... there's also rubber hose cryptanalysis. Whether the payoff or actors involved here make that a legit concern I don't know.

I know Linode has been getting hammered for not being trustable or that they don't respond well when contacted, but in my experience they have been extremely responsive to any issues I have raised. I don't just mean technical issues either; we had what looked like a security breach on our account and it kicked off a small frenzy in their support staff and supervisory team to track down the problem. They figured out what had happened after a thorough investigation, gave us a full post mortem and promised they were reviewing procedures to make sure it couldn't happen again.

I've always had an excellent experience when dealing with Linode one-on-one, via email or support ticket, but their one-to-many communication sometimes seems to be lacking.

I'm glad they posted a more detailed update regarding this issue publicly.

I can second this - Linode has always responded quickly and effectively in my experience.

I third this. I've never had a problem. Most of my tickets have a response within 10 minutes and they always seem to have been open when anything has gone wrong on their end. Is anyone not thinking that the reason they didn't come out in public talking about this until now is because law enforcement was involved? Notifying the public could ruin the investigation.

No. Because Linode has been hacked before and exactly the same situation is playing out. Major compromise and selective release of information.

I really do wonder at what point people are going to realise that they aren't dealing with a company who cares about their interests.

This seems to happen when they think it isn't their fault. As soon as it's something that could reflect negatively on them they put out the smallest most vague amounts of information possible. That happened with the xen security issue, and with the previous hack.

If the hackers got to the ColdFusion source code then it is quite possible they decrypted credit card numbers.

The passphrase may not be stored in the database but presumably the decryption key is kept in memory by the app server while running. Otherwise it wouldn't be able to charge anything. And if the hackers could modify the ColdFusion source code, they can get to anything the app server can, including the decryption key. ColdFusion uses an interpreted language so injecting code is possible with just filesystem access.

Linode says there's "no evidence" of decrypted card numbers, but I think it's more a matter of no evidence... yet. They also call salted hashes "useless" to hackers, which betrays a rather unenlightened perspective on infosec.

From what HTP said on IRC, the credit card numbers were decrypted within the application context, which would be consistent with what you said. But during the IRC conversation, HTP willingly gave out the last four digits of people's CC#s, but ignored any request for digits before that, which would be more consistent with what Linode is stating.

It's all a very confusing world where nobody has a definite answer. Just my 2 cents; don't take this as fact.

There's no inconsistency. Linode said "Credit card numbers in our database are stored in encrypted format... Along with the encrypted credit card, the last four digits are stored in clear text..."

Last 4 is all you need to wreak some serious havoc though. Service cancellations and whatnot...

Good to see some discussion and acknowledgment from Linode, but their treatment of the credit card encryption is distressing. They don't address the claim that the private keys were on the servers' filesystems, and instead focus on them being password protected. If the password is brute-forced, then it's game over.

I think a lot of people are failing to understand how credit card processing works. The private key needs to live on some machine, if you actually want to have any CC requests processed at some point. If the private key itself isn't on the frontend/user-facing server, then instead, when the frontend node is hacked, the hacker will find a shared secret or signing key to auth the frontend node against the backend node that does process the CC requests.

The hacker won't be able to take your CC number for themselves in this case, but they'll sure be able to put spurious/malicious charges on your account.

CC data however does NOT have to live on any machine that anyone except your merchant bank owns. If you are writing against billing APIs in the 21st century then you only need to store a token which you use for recurring billing. Not everyone does this, but it's a reasonably good practice, especially if you have had intrusions in the past.
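For anyone unfamiliar with the pattern, here's a minimal sketch of gateway tokenization. Everything here is invented for illustration (real providers expose the vault over an HTTPS API, not an in-process class); the point is that the merchant's database ends up holding only opaque tokens:

```python
import secrets

class Gateway:
    """Toy stand-in for a payment provider's vault. In reality this runs
    on the provider's infrastructure; the merchant never sees the PAN."""
    def __init__(self):
        self._vault = {}  # token -> card number, held only by the provider

    def tokenize(self, card_number: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = card_number
        return token

    def charge(self, token: str, cents: int) -> bool:
        # The provider charges the real card here; the merchant only
        # ever supplies the token.
        return token in self._vault

class Merchant:
    """The merchant stores only the opaque token, never the card number."""
    def __init__(self, gateway: Gateway):
        self.gateway = gateway
        self.customers = {}  # customer id -> token

    def save_card(self, customer_id: str, card_number: str) -> None:
        self.customers[customer_id] = self.gateway.tokenize(card_number)

    def bill(self, customer_id: str, cents: int) -> bool:
        return self.gateway.charge(self.customers[customer_id], cents)
```

If the merchant's database leaks, the attacker gets tokens rather than card numbers, and a token only authorizes charges to that one merchant's account.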

Out of curiosity, is this token generation sufficiently general or is it tied to a specific bank/credit card provider? (Basically is it easy to switch providers without having all your customers enter their cc information again?)

Unfortunately it is tied to the merchant. :(

From their description it appears encrypted CC numbers were in the database amongst the other customer information. Sure, the data has to live somewhere, but the apparent situation of CC, customer, private key all accessible to the frontend looks sub-optimal.

There's a difference between obtaining the private key and obtaining the private key file, which according to Linode was protected with a (hopefully strong) passphrase...

It might have been encrypted, but surely it must be exposed to their system somewhere to enable them to make charges? Is it feasible, if that was the case, that the attackers could use that to exfiltrate decrypted CC info?

I suppose this wild speculation isn't helpful or constructive, and waiting for more information might be a better idea. Judging by the comments on the previous post, though, I think the reputational damage has already been done.

It depends on how they implemented it. Maybe there's an operator that logs on once a day and enters the passphrase, and then they use it to charge credit cards. So the private key is exposed only for a brief period.

Another possibility is that the passphrase is entered when the machine is booted, and then the private key (not the passphrase) is loaded into memory. This is how e.g. ssh-agent[1] works. When this mechanism is used, you don't get access to the private key itself without direct memory access (which is possible, but not that easy to carry out, and would typically require root access and debug tools etc). However, the private key can be then used to carry out crypto operations.
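A rough sketch of that boot-time pattern, assuming an operator types the passphrase once at startup (in a real system you'd read it with getpass and use real symmetric crypto; the XOR keystream below is a deliberately insecure stand-in so the sketch stays stdlib-only):

```python
import hashlib
from itertools import cycle

def _keystream_xor(data: bytes, kek: bytes) -> bytes:
    # NOT real encryption: a toy stand-in for AES so the example is
    # self-contained. Only the structure of the pattern matters here.
    return bytes(a ^ b for a, b in zip(data, cycle(kek)))

def encrypt_key_for_disk(plain_key: bytes, salt: bytes, passphrase: str) -> bytes:
    """What gets written to the filesystem: the key, never the plaintext."""
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return _keystream_xor(plain_key, kek)

class KeyAgent:
    """ssh-agent-style holder: the passphrase is entered once at startup
    (e.g. via getpass.getpass()), and the decrypted key then lives only
    in this process's memory."""
    def __init__(self, encrypted_key: bytes, salt: bytes, passphrase: str):
        kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
        self._key = _keystream_xor(encrypted_key, kek)  # plaintext key, RAM only

    def sign(self, payload: bytes) -> bytes:
        # Stand-in for a real crypto operation done with the held key.
        return hashlib.sha256(self._key + payload).digest()
```

An attacker who steals the key file off disk gets only the encrypted blob; an attacker who can read the running process's memory gets the key itself, which is exactly the distinction being argued about here.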

From the IRC chat logs, it looks like ryan got the private key file, but he didn't get the private key itself. At least he didn't seem to provide any solid proof to suggest he did.


It's possible to debug targets running under the current user, so root privileges might not have been needed, if they had an agent running under the same UID as the application.

> However, the private key can be then used to carry out crypto operations.

Surely this is of concern? According to the logs they'd known about the breach for a week - that seems like enough time to decrypt each card one by one and exfiltrate them.

Why would you keep the private key file on a machine on which you would never enter the passphrase?

I suppose it could have been kept only in memory, rather than just sitting on the filesystem in a file called private_key, which AFAIK is basically what happened. It's surely within the boundaries of sense to load the key manually each time the app boots.

Thinking about it though, that's basically the role of the key passphrase.

Right; this is a common experience with SSL/X.509 signing keys--they're encrypted by default, so any server you configure with your ssl.pem file will prompt for a passphrase on startup, and only keep the decrypted key in memory.

That's why this line, that most SSL providers give in their instructions, is so dangerous:

    openssl rsa -in server.key.org -out server.key
It strips this (rather useful) encryption, handing your private key to any attacker who compromises your frontend server, in exchange for letting the node come up non-interactively.

This is what ssh-agent does, and I could only hope Linode was using something similar. I think the statement suggests they did.

> The private key is itself encrypted with passphrase encryption and the passphrase is not stored electronically

So even if the hacker bragged about obtaining both the public and private keys, the private key was protected and therefore unusable without the passphrase.

At this point, we can all hope linode used a very complex passphrase... My gut feeling is that they did.

Storing a passphrase (or unlocked private key) in memory is storing it electronically.

Once the attacker has root, they can dump the memory and access this information. The attacker claims to have gigabytes of data, which could well include a memory image (or swap partition image, if the program that stored this in memory didn't lock the memory to prevent it being swapped out).

Attacker claims to have done so in this irc log: http://pastebin.com/7WXRDyAg

> 07:52 < HTP> the CCrypter class of the linode application context was accessable from outside the wwwroot using undocumented ColdFusion methods. i was fully able to decrypt the ccs using the in-memory privkey that they supplied the password for.

> 07:53 < HTP> coldfusion runs as a single process, and its memory can be accessed using the ColdFusion wrapper

Can anyone explain better how the passphrase is used to encrypt the private key? Is it AES?

That doesn't make any sense. It's exactly like saying: "Oh sure, the credit card numbers were encrypted but if the encryption key is brute-forced then it's game over".

The CC numbers were encrypted using a public key. The corresponding private key was itself encrypted using "[a] complex passphrase [that] is not stored electronically". They could be lying, but if they are not then this is perfectly valid security.

Using this scheme, you can store new CC numbers any time you want using the public key. You'd have a regular billing cycle (say a month), with billings happening in a short time window (all my Linode accounts get billed within minutes of each other, so I think this is the case for Linode). When you want to bill your customers you'd input the passphrase which wouldn't be stored anywhere, would be used to temporarily decrypt the CC numbers and bill everyone. Unless you are attacked in that window (or an attacker gains access before that window and can remain unnoticed until it happens, and even then they'd need root access to read the memory, not just access to the filesystem), the CC numbers are pretty safe against any attack that leaks files/databases/etc.
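The write-any-time, decrypt-only-in-a-window property falls out of public-key crypto. A toy illustration with textbook RSA (hardcoded primes, no padding, an invented passphrase; utterly insecure, just showing the shape of the scheme):

```python
# Textbook RSA with hardcoded primes: an insecure toy, for illustration only.
P, Q = 1_000_000_007, 998_244_353   # both prime; real keys are far larger
N, E = P * Q, 65537                 # public key: anyone can encrypt
PHI = (P - 1) * (Q - 1)

def store_card(db: dict, customer: str, card_number: int) -> None:
    """Frontend path: needs only the public key (N, E), runs any time."""
    db[customer] = pow(card_number, E, N)

def billing_run(db: dict, passphrase: str) -> dict:
    """Batch path: the private exponent exists only during this window."""
    # This check stands in for decrypting the passphrase-protected key file.
    assert passphrase == "correct horse battery staple"
    d = pow(E, -1, PHI)  # private exponent (modular inverse, Python 3.8+)
    return {cust: pow(blob, d, N) for cust, blob in db.items()}
```

Outside the billing window, a leaked database plus the encrypted key file is not enough; the attacker still needs the passphrase (or a memory dump taken while a billing run is in progress).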

> When you want to bill your customers you'd input the passphrase which wouldn't be stored anywhere, would be used to temporarily decrypt the CC numbers and bill everyone.

I've had account changes billed to my card within minutes, even (and especially) in the middle of the night (when I do most of that stuff). I'm having a little bit of trouble imagining that someone's sitting in front of a terminal and typing a passphrase into a prompt every three to five minutes, all day long.

If I were designing the system given the constraints specified, it would look a lot more like sudo's timeout implementation, or one of the uncounted zillions of encrypted filesystem implementations over the years.

An encrypted filesystem doesn't mean the sysadmin types in the passphrase every time something hits an "open()" call; it's (usually) just once at boot time. Or sudo can be configured so you authenticate every hour, not every time you run a command.

Also, to be honest, I've never had a problem with Linode, and I've heard they've got pretty good support, so if there is a dude on duty at 3am he probably has nothing to do other than type in a password. That would be an alternative, simple architecture. Where it gets hard to justify is when there's enough traffic that you're paying a dude to sit there typing in the password all day, but they're apparently not that big.

(edited to add I hope they didn't actually use encfs and/or sudo, I was using them as an example of a user interface not as the final answer for how it would be implemented.)

Interesting. Whenever I make changes that affect how much I'll be billed, the amounts get added or removed and take effect when the card next gets charged (which happens once a month). At least that's how I remember it... going to dig through some old statements now to verify, as it's been a while since I last changed anything that would alter what they charge me.

I had to go back further than I thought to find an off-cycle charge, but I did find one from January 2012, for a plan upgrade, 10 minutes before midnight, for $6.77. It looks like that was the last plan modification I made.

They may've changed their billing system in the last 12+ months.

I haven't got online statements going back far enough to check right now (will have to do it with paper copies when I get home if I can be bothered). You're probably right though :) I guess they either keep the passphrase in memory (someone enters it once on boot) or another possibility is they have a couple of billing windows each day where they batch through whatever charges/refunds have accumulated since the last run. Feasible I guess with 2 or 3 specially cleared staff members on rotation.

You make a good point

My concern is that if the password was poorly chosen, the private key won't be very safe if they got the private keyfiles. That many credit card numbers is a huge target so it'd be worth an attacker expending some money on.

If it's a strong, long non-dictionary passphrase, then I'm not nervous. But, we don't really have any idea if it is or not. The fact that their web interface boxes had the private key on them is a huge concern - you'd ideally run all billing stuff on a machine that is highly locked down, perhaps not even accessible outside of a very strict range of IP addresses. The practice of leaving the private keys hanging around on the same boxes that ran the website makes me awfully nervous about the other links in the chain.

My thoughts exactly. Then again, I'd expect them to use a rather strong password for sensitive data. Considering 10^12 tries per second (which is probably very far fetched) and a 64 character alphanumeric password, my credit card -- probably the whole galaxy -- will have long expired until the password can be expected to be cracked.
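That back-of-the-envelope estimate is easy to check, using the same assumptions as above (10^12 guesses per second, 64-character alphanumeric passphrase):

```python
# 62 alphanumeric characters (a-z, A-Z, 0-9), 64 positions
keyspace = 62 ** 64
guesses_per_second = 10 ** 12
seconds = keyspace / (2 * guesses_per_second)  # expected time: half the keyspace
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")  # on the order of 10^95 years
```

Even shaving the guess rate or passphrase length by large factors leaves the estimate comfortably beyond any plausible attack, which is why everything hinges on whether the passphrase was actually that strong.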

At that point, it really depends on how strong their passphrase is. If it's a dictionary word, we're pretty much fucked.

I agree with you. Besides, how do they use the private key? Are they using a GPG agent or anything like that? Do they enter the password every time they need to use the private key?

Having the private key in the web frontend sounds like a bad idea (as this incident has proved); they should use a backend system that is not accessible from the Internet. I don't know how their system works, but the web frontend only needs to encrypt CC numbers; I'm not sure it needs to decrypt them, as that could be performed in a more restricted environment.

Security incidents suck, but the good news is that they now have a chance to improve their security.

Even then, attackers with half a brain (and malicious intent) would also have backdoored the payment page to harvest "fresh" credit cards numbers during the period they went unnoticed (possibly days/weeks). It's another good reason to leave the credit card handling to a reliable 3rd party imo.

The hacker has claimed the public and private key were both stored on the same machine and both accessible. The only protection left is the passphrase then. Various people have reported fraudulent activity on their CCs.

Personally, I think Linode didn't mess this up as badly as many other companies. The statement took a while to appear but it's there.

As a Linode customer myself I can neither report anything unusual on my box, nor activity on my CC. Thus far I'm somehow inclined to believe Linode when they say no CC information has been compromised. Either way, they are offering a great service and I'll stick with them. Worst case, I'll have to get a new CC.

With the number of customers Linode has, I'm quite ready to chalk that up to people freaking out because they use their CC on Linode and not realizing they also use it on three dozen other things that might be the cause.

> Various people have reported fraudulent activity on their CCs

Most likely not even related

Well, let's think about it a bit. It turns out the numbers are ridiculously hard to find, although I'm a pretty good searcher/infovore. Better info would be appreciated. Anyway, the best I could find was two claims:

There were 11 million American victims of identity theft in 2011 (out of about 400 million), for a ratio of about 1 in 40 of the general population getting p0wned per year. Obviously not all ID theft is CC theft, but I'd guess a lot of it is. Based on the experience of friends and family, getting a CC number stolen once every four decades of use passes the smell test, so I will go with that.

A quote from prweb.com found by google: "Started in 2003, Linode has grown to over 45,000 customers". I wonder what year that quote is from. Still "forty five thousand" is probably not totally ridiculous. It makes sense looking at their hostname scheme, ip space allocations as per whois, and some educated guesses. Lots of people including myself like linode.

So assuming the average linode customer is the same as the average joe6pack then 45000/40/365 = about 3 linode customers should get p0wned per day regardless of any linode problem. To make my inner EE happy I'll call the "noise level" about 3/day and evaluate the SNR based on that noise level to see if there's a signal of p0wnership.
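The noise-level arithmetic is easy to recompute if you want to tweak the inputs (the 45,000 and 1-in-40 figures are the ones quoted above):

```python
customers = 45_000            # the prweb figure quoted above
annual_theft_rate = 1 / 40    # ~1 in 40 per year, from the identity-theft stat
expected_per_day = customers * annual_theft_rate / 365
print(f"{expected_per_day:.1f} baseline incidents/day")  # ~3.1
```

So even with zero Linode wrongdoing, you'd expect a few fraud reports per day from their customer base just from background identity theft.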

Obviously the two reports over the course of a couple days is not 100% of all linode customers who got p0wned. But it does show that two reports doesn't really prove anything.

Now, as a made up example, 900 reports over 3 days would be a HUGE indicator "something" happened. But at a predicted noise level of about ten or so over three days, two reported is down there in or below the noise. So, as a long term linode customer I'm not freaking out (yet). My CC does email me every time a charge is made, and nothing weird has been seen. All quiet on the western front.

Nicely done.

I've seen two mentioned in each of two recent Linode threads, potentially the same people. Have there been many more?

It'd be worth having a read of the logs; if true, it appears the attack was only to compromise a specific target.

The fraud won't occur till the database is released and the private key is cracked.

The update has quite a frank and an apologetic tone to it. Especially the concluding paragraph gives it a very empathetic touch. It must be truly tough for the ops folk at Linode to have suffered an attack due to a third party 0-Day exploit. It could happen to any of us really.

On a side note, I am not sure of the "some occurrences of plaintext Lish passwords". Seems like quite a goofup on Linode's part.

Adobe stuff is best assumed vulnerable at all times though.

It's not really a zero-day if a patch has been out for a week, though.

It was a 0-day when they got hit. The sad thing is that they don't say how much time has passed since the hack.

I'm not sure; their wording around it is a bit convoluted:

"... previously unknown zero-day vulnerability in Adobe’s ColdFusion application server. The vulnerabilities have only recently been addressed in Adobe’s APSB13-10 hotfix (CVE-2013-1387 and CVE-2013-1388) which was released less than a week ago."

To me that reads like 'it was a former zero-day exploit", as all exploits are.

Linode brought this issue to Adobe's attention. It's pretty clear that the hack happened 1-2 weeks BEFORE it was announced to the public on Friday. See: http://seclists.org/nmap-dev/2013/q2/3

seclists.org was one of the sites hit in the hack. Note how they were down "over the last week". Also note that the Adobe security bulletin was released on April 9: http://www.adobe.com/support/security/bulletins/apsb13-10.ht...

So yeah, these hackers found the problem (a 0-day in ColdFusion) and used it against Linode. Then Linode got Adobe to look into it, and they found the cause.

Not all exploits are zero-day exploits. As I understand it, a zero-day exploit is one that is used in the wild before it is disclosed. Plenty (most) vulnerabilities are only disclosed once a fix has been implemented.

I stand corrected

Yeah, maybe you're right. The text is a bit unclear about that.

Yeah, they didn't tell us when they got hacked, but seclist did (3/31/2013): http://seclists.org/nmap-dev/2013/q2/3

3/31/2013 is the backup they're reverting to, not when they got hacked though, right? (The email refers to "downtime over the last week", so I guess the hack happened around 6th April or later.)

Sorry but having passwords in plaintext is not a "goofup".

It is completely and utterly inexcusable.

A) It wasn't account passwords. It was Lish passwords (no better, but I thought I'd point that out).
B) Not all passwords were in plaintext.
C) Judging from the above, I would guess there was some mixup where passwords were written to the logs when they were changed, but that's just speculation.

> It could happen to any of us really.

Yes, but the whole private key thing and the ability to access the cc numbers from the front-end (unless they hacked the DB server as well, I think they didn't?) reeks of failure.

The only thing standing between the plain text cc numbers and the hackers is the passphrase, better pray that it's a good one.

It seems seclists was affected: http://seclists.org/nmap-dev/2013/q2/3

I wish PCI compliance required companies that take credit card info for any sort of recurring billing to declare whether they store your credit card information themselves or whether it is stored for them by their gateway/merchant account provider.

There are thousands of very small SaaS services out there that bill monthly, which I assume would take advantage of gateway-stored credit card info; then you find out that a comparatively larger, more trustworthy player like Linode doesn't.

Similarly it would be nice to also have them declare if they use something like Braintree's transparent redirect such that they never even see the card numbers.

Who on earth uses ColdFusion any more? Surely when they re-built it a while back they could have used something more secure like Django?

I agree with questioning their choice of ColdFusion, but I have some reservations about your proposed alternative of Django. I don't know how secure Django is, but I know it's big. And my perception is generally that anything that big is bound to have a few loose ends if looked at under the magnifier.

I can't believe they don't address the claim that the public and private keys were stored in the same place. It makes me wonder if that's actually true, which knocks my trust down just a little bit more.

Latest claims on #linode by HTP: http://pastebin.com/7WXRDyAg

"Along with the encrypted credit card, the last four digits are stored in clear text to assist in lookups and for display on things like your Account tab and payment receipt emails."

I really don't see the need for that at all. What sort of credit card "lookup" are they doing exactly?

I guess it's mainly to display stuff. When you have multiple cards on Amazon you can choose which one to use; without seeing the last four digits it's pretty hard to tell which is which.

It's not for them, it's for users. It helps to know which of your CCs is going to be charged.

It is fairly standard. Go look at your Amazon account, you'll see the last four digits.

I think the only thing that could somewhat restore the trust in Linode would be a third-party security audit from a reputable company.

Hmm. I'm sort of re-assured, but what's happened to the claim that this was targeting a specific customer?

It seems HTP went after nmap.org machines, so I'm guessing that would be the targeted customer they picked?

Do you have a chatlog or citation?

I'm not arguing, I'd love to read it! :)

Here's the writeup from seclists.org: http://seclists.org/nmap-dev/2013/q2/3

It looks like seclists.org and nmap.org are hosted on different IP addresses, but both route to linode, and both have the same WHOIS information, so I think it's safe to say the writeup applies to both.


By the sounds of their writeup, the attackers just visited the "highest profile" sites as per the Quora link to obfuscate their real attack.

It's not a "zero-day exploit" if there's been a patch out for it for N days, where N > 0, right?

Exactly. It would only be a zero-day if the attackers gained access before those N days. Did they?

Should CC info even be stored in the customer database? I would have thought that information should be write only. Does PCI allow that to happen with only PKI in place?

Most merchant account gateway providers have some sort of tokenization scheme to allow for recurring billing. The token is tied to transferring money from a specific credit card to a specific merchant account so it's not useful for 3rd parties to gain access to, unlike raw CC data.

I agree with your position of not storing the CC in the consumer database. However, it cannot be fully write-only, since consumers usually have to be billed periodically without being asked for their number again.

Ah, my description wasn't the best, apologies. What I meant was the payment backend, a separate server, could only service write requests from the frontend. The backend itself would possess decryption keys to make charges, but the frontend could only ever ask it to bill a customer or change the information via a write-only API.
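That write-only shape can be sketched like this (class and method names are invented; the point is that the backend's narrow API has no call that returns a card number to the frontend):

```python
class BillingBackend:
    """Lives on a locked-down host. The frontend can store and charge,
    but there is deliberately no method that returns a card number."""
    def __init__(self):
        self._cards = {}  # customer -> card data (encrypted at rest in reality)

    def put_card(self, customer: str, card_number: str) -> None:
        self._cards[customer] = card_number  # write-only from outside

    def charge(self, customer: str, cents: int) -> bool:
        # Decryption and the gateway call happen here, behind the firewall.
        return customer in self._cards and cents > 0

class Frontend:
    """User-facing tier: holds only a handle to the backend's narrow API."""
    def __init__(self, backend: BillingBackend):
        self._backend = backend

    def update_payment_method(self, customer: str, card_number: str) -> None:
        self._backend.put_card(customer, card_number)

    def bill(self, customer: str, cents: int) -> bool:
        return self._backend.charge(customer, cents)
```

A compromised frontend can still push bogus charges or overwrite cards, but it has no code path that reads stored numbers back out.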

Having said that, if the API were provided via ColdFusion it wouldn't have helped anyway.

That is in fact what was done in an application I worked on. We used ColdFusion for both the Front-End and the "Commerce Server" and the commerce server only took credit card information from the front-end and could not give it back. It was behind its own hardware firewall and any exploit in ColdFusion would have been hidden from the front end unless it could be exploited through the API methods we exposed through the gateway. To gain access to any other url endpoints you would need to access the server from within the firewall by connecting to a VPN gateway on a separate physical network that was only connected by a dedicated T1 to our office. We did this before there was PCI, because it made sense.

The whole system has now been rewritten in .NET. It would be interesting to see if I can find that old CF app. It ran solid for almost ten years on the same 3 NT4 servers, handling tens of thousands of users per day. And it could probably have run on one server, except that we isolated the commerce and MSSQL functions from the front-end app.

Thankfully, you can leave it up to someone else to vault your PCI data if you keep having intrusions.

To be honest, I'd rather have my credit card number in the wild than be worrying about my sites/servers being compromised. Obviously neither is ideal, however.
