Why LivingSocial’s 50 million password breach is graver than you may think (arstechnica.com)



>LivingSocial engineers should be applauded for adding cryptographic salt

Should they? Should they really? Should we also give them a cookie for properly dressing themselves in the morning?

Uniquely salting and hashing passwords is the bare minimum for acceptable security practices. It is not a sign of expertise, but of competence.


I don't get why they used SHA-1. I mean, LivingSocial has been sending people to speak at and attend a lot of developer conferences. They hire a lot of bright people. Did none of the Rails devs raise an alarm, like, "Uhh hey guys, why don't we use the Rails-community standard, bcrypt?"

I suppose it could be corporate bureaucracy or something, but it seems weird that at a company with at least 30+ developers (based on public members of the GitHub org) no one would point out that this is a Bad Idea.


I could think of a few reasons:

1. user management is a pretty trivial thing to code and one of the first things you do when you set up any site. It could have been a simple oversight before a rushed launch.

2. Playing off of 1), once you have any users in a database using a particular system, you have to go through a bit of a process to convert people over (you can't convert from one hashing algo to another yourself, since you don't have the original passwords). That involves creating a concurrent password table, creating login hooks that force the user to upgrade their password on login (especially a pain if they're using native apps -- I'm not sure if they are), etc. Making sure that 50 million users migrate over without a hitch is a potentially site-breaking undertaking that could have a serious impact on the site's success. Given the rocky climate LS currently faces, you'd have to have nerves of steel to lead an initiative like that.

3. Depending on how LivingSocial has their code management structure set up, there may only be a few devs who actually have access to the user login system (you really don't want an intern or freelancer to be able to modify those). Given that the same devs who have that access probably have more immediate issues to deal with day-to-day, something like the current hashing algorithm might not be on the top of their priority list.

Not that any of these are very good excuses, but I could see how it could happen.


I do not agree with your #2 regarding making users re-create their password.

Assuming you haven't been hacked yet, the old password is still secret. Therefore you modify your login system so that when the user next logs in:

* Check password against old hash

* If old hash exists and matches user's password then re-hash the password using new hash and save new hash in new field

* If old hash does not exist then just check against new hash only.

Run the above code for X time (X being the time it takes Y% (defined by the company) of your users to log in at least once).

After X time THEN you start forcing the remainder to switch over.

Done properly, the vast majority of your users should not even know you have changed anything.

[EDIT] - With some extra code you could probably avoid creating a new field for the new hash by checking the single hash field against the old algorithm and then the new one. This would avoid a potentially costly DB migration.
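
A minimal sketch of that login-time migration, assuming a legacy salted-SHA-1 column plus a new bcrypt column (the column names, the user object, and the Python bcrypt package are illustrative assumptions, not LivingSocial's actual schema):

    import hashlib
    import hmac

    import bcrypt  # pip install bcrypt


    def verify_and_migrate(user, password):
        # Check a login and opportunistically upgrade legacy hashes.
        if user.bcrypt_hash:
            # Already migrated: only the new hash matters.
            return bcrypt.checkpw(password.encode(), user.bcrypt_hash)

        # Legacy path: recompute the salted SHA-1 and compare in constant time.
        legacy = hashlib.sha1((user.legacy_salt + password).encode()).hexdigest()
        if not hmac.compare_digest(legacy, user.legacy_sha1):
            return False

        # Correct password: re-hash with bcrypt and clear the weak hash.
        user.bcrypt_hash = bcrypt.hashpw(password.encode(), bcrypt.gensalt())
        user.legacy_sha1 = None
        user.save()
        return True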


It's much safer to just hash twice: the first time using your old hash function and again using a secure hash. Then, to migrate, you only have to update all your customers' password hashes in place. Additionally, if you lose your database, you don't leak all the old, weak password hashes.
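
A sketch of that wrap-in-place idea, assuming the legacy hashes are hex SHA-1 digests of salt+password (the function names and the Python bcrypt package are hypothetical, just to show the shape of it):

    import hashlib

    import bcrypt  # pip install bcrypt


    def wrap_legacy_hash(sha1_hex):
        # One-off migration: bcrypt the existing SHA-1 digest in place,
        # so the weak hash never sits unprotected in the database.
        return bcrypt.hashpw(sha1_hex.encode(), bcrypt.gensalt())


    def check_password(password, salt, wrapped_hash):
        # At login, recompute the legacy digest and check it against the wrapper.
        sha1_hex = hashlib.sha1((salt + password).encode()).hexdigest()
        return bcrypt.checkpw(sha1_hex.encode(), wrapped_hash)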


Ooo, I hadn't thought of that! For some reason, I had it in my mind that using an insecure hashing algorithm as the base would result in an insecure final hash, but of course that concern goes out the window when your whole password table is out in the open.

I'm glad you mentioned "safer", because for some reason rurounijones's approach made me nervous, but I couldn't quite put my finger on it.

Here's some more discussion on the issue: http://crypto.stackexchange.com/questions/2945/is-this-passw... (I found that from here: http://stackoverflow.com/questions/3955223/password-hashing-...).


User management may be trivial to code, but the repercussions of doing it wrong are anything but trivial (exactly as we're seeing now).

Someone with a bit of experience in this field would know to include the hash algorithm in the stored data when processing a password set/change/reset. For example, the good old Netscape LDAP server stored passwords in the general form "{ssha}xxxxxx", "{crypt}xxxxxx", or "{plain}xxxxx". This allows for easy comparison as users log in, as well as for gradual upgrade to stronger algorithms in the future when accepting changes--one could extend this with "{bcrypt}", "{pbkdf2-10k}", etc., etc.
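
A rough sketch of that prefix scheme (the tags, the 8-byte salt, and the helper names are illustrative, not Netscape's actual on-disk format):

    import hashlib
    import hmac
    import os

    import bcrypt  # pip install bcrypt


    def hash_password(password, scheme="bcrypt"):
        # Store the scheme alongside the digest, e.g. "{bcrypt}$2b$12$..."
        if scheme == "bcrypt":
            return "{bcrypt}" + bcrypt.hashpw(password.encode(), bcrypt.gensalt()).decode()
        if scheme == "ssha":  # salted SHA-1, kept only for reading legacy entries
            salt = os.urandom(8)
            digest = hashlib.sha1(password.encode() + salt).digest()
            return "{ssha}" + (digest + salt).hex()
        raise ValueError("unknown scheme: " + scheme)


    def check_password(password, stored):
        # Dispatch on the prefix; on success a caller can re-hash with a stronger scheme.
        if stored.startswith("{bcrypt}"):
            return bcrypt.checkpw(password.encode(), stored[len("{bcrypt}"):].encode())
        if stored.startswith("{ssha}"):
            raw = bytes.fromhex(stored[len("{ssha}"):])
            digest, salt = raw[:20], raw[20:]  # SHA-1 digests are 20 bytes
            return hmac.compare_digest(hashlib.sha1(password.encode() + salt).digest(), digest)
        raise ValueError("unrecognised scheme prefix")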

Having only a small number of trusted ignoramuses working on the code is a pretty lame defense. Honestly this stuff is in the news all the time. How could nobody at such a massive site raise the question, "if our database is compromised, will we be exposed as careless oafs?"


I agree with you completely. I was hoping people would take my comment more as important lessons than excuses.

1) Always review the code for sensitive data before allowing it to go live (especially by someone well-versed in security).

2) Have a plan of attack in place for security upgrades so that they're not quite so painful.

3) Despite day-to-day responsibilities, there need to be regular reviews of security practices by management.


If I had to guess - it was done long ago when SHA-1 was what you used because it wasn't MD5, and nobody thought to change it since it was working.


Maybe - but you would think that a company with "Tech Leads" would think to check that kind of stuff. Or some new hire would come in and ask innocently, "Hey, what do we use for password hashing?"

I find it pretty hard to believe that no one would ever poke around the User model in a Rails app :)


There has really never been a time where SHA1 was appropriate.


Sure there was - before the debut of high-powered GPUs. Try cracking SHA1-hashed passwords on CPUs - even modern ones - and let me know how far that gets you.

By comparison, today's bcrypt passwords at today's work factors are tomorrow's ASIC snacks. That doesn't mean bcrypt at reasonable work factors is wrong today.


Hearing about all these recent hacks (LinkedIn, RubyGems, LivingSocial) really makes me nervous and wonder how I can avoid hacks. Are there any details on how these systems were hacked in the first place, so that the rest of us can learn from the lessons? I already use as many security good practices as I can (bcrypt, processes and apps running under different user accounts and database accounts, read-only accounts, principle of least privilege, firewall, fail2ban, key-only SSH logins, file modification monitoring, forwarding logs to a log host, etc) but I can't shake off the feeling that I may be forgetting something.


Who is able to physically access your database hardware? Can you or any of your employees walk up to it? If so, who? If someone had a gun to your head and demanded your username and password list or they'd shoot you, how would you get the data for them?

Who knows the sensitive database passwords? Where are they stored? How often are they changed? Have you fired anyone recently? Was their access revoked and were all passwords changed?

Things you didn't mention as precautions are protections from SQL injection attacks (I'm assuming you're using prepared statements) and that you're using huge passphrases, along with making sure that your applications are up to date with the latest security patches.


"and wonder how I can avoid hacks"

(I'll leave it to others more knowledgeable to answer this but keep what I say next in mind.)

My question to you is, how big of a target are you?

I mean, the security you would use if you are Madonna is not the same as what you'd use as a "B"-list actor or the person working in the local coffee shop.

You can put untold hours into this but unfortunately security is a full time job and can also detract from other things you need to be concerned with that might be more important for your particular situation.

"I may be forgetting something"

Sure but the same concern is in play when taking a trip or with anything. What are you forgetting? Where are you going? How easy will it be to solve the problem and how much impact will forgetting dental floss have? Are you going to an uninhabited island or Manhattan where you can buy almost anything 24/7?


What you're advocating is bordering dangerously into the realm of "security through obscurity" - a very bad principle to base any kind of security on.

While the commenter above might not be a high-profile target for the moment (and let's assume that he isn't, for the sake of playing devil's advocate), who's to say that next week a story won't hit the news about his wife or children or even a company he's affiliated with, and then reporters decide to start poking about on forums and social media for cheap information to write sensationalist reports on? Or that hackers use that news as an excuse to target the poor guy?

Or even who's to say that next month he annoys someone on HN who decides to play some pranks?

While you are right that security is a full-time job, you should never advocate playing the numbers game, because as remote as the odds might be, people do win the lottery.


I think both your points and your parent post's are well taken. IMO you should do all that you can to secure your site, within practical reason.

The LivingSocial password hashing debacle is an example of a security failure that pretty much every website in the known universe could improve on by changing a single line of code -- that's what's so frustrating about it in the first place.

There are cases, though, where certain security measures are probably not needed. For example, I would probably not recommend that my mom use a keyfob security token to log into her blog admin page (although I'd probably offer to make one for her with an Arduino as a weekend project lol), or hire armed guards to guard her database servers. Enterprises certainly need these measures along with many others.


What you're discussing is how valuable the data is. E.g. if your mum lost her blog (and kudos to her for being tech-savvy enough to run one - my mum certainly isn't!), it would be frustrating, but not really a big problem. She'd just restore from a backup, change the passwords, then get back to business as usual. But then you're comparing that against examples where there's confidential information (e.g. user passwords) which, if stolen, is a massive problem.

I'm assuming from the OP's tone that his box sits somewhere between the two extremes. However, the very fact that he's got an SSH account on it makes the box a much more favourable target than the millions of free blogs on shared hosting services - so even if his box doesn't contain 3rd-party passwords and/or credit card information, and even if he can easily restore the box from a backup, he still has the problem that a break-in could see his box used to send spam and whatnot. Many hosting providers will just kill your subscription if that happens.

So while you're right that there's a balance between security and usability, security most definitely comes first if you're administrating your own VPS / dedicated hardware.


"What you're advocating is bordering dangerously into the realm of "security through obscurity" - a very bad principle to base any kind of security on."

I don't believe that is what I am saying. I'm saying you are less of a target if you are "nobody" than "somebody".

Would you agree that it is proper to have more security in place at an event like the Boston Marathon or a major league baseball game, or when the prime minister visits, than at the dance recital at the local school?


> I don't believe that is what I am saying. I'm saying you are less of a target if you are "nobody" than "somebody".

But that's exactly what security through obscurity is!

> Would you agree that it is proper to have more security in place at an event like the Boston Marathon or a major league baseball game, or when the prime minister visits, than at the dance recital at the local school?

If you must use that crude analogy then please remember that in the UK we've had terrorists blow up random buses and trains (7/7) and attack town centres on otherwise uneventful days (the IRA). In America there have been frequent shootings at schools without any high-profile dignitaries visiting. So your own example proves my point: being a seemingly low-profile target doesn't mean that you're safe from attack.


I wonder why password tables are readable at all. An application only needs to call a stored procedure that compares the computed hash value against the stored one. The stored procedure needs the rights to read the data; no application and no administrator should be able to read it directly.


This isn't a bad idea at all, but if you lose control of your database, you need to assume you lost control of your app servers and your database server.


I suspect it's a side-effect of the fact that web frameworks tend to encourage "pass the database login once and let the framework manage everything", so there is no protection if you manage to get into whatever is running the site.


The "one database login" rule (a.k.a. trusted platform design) doesn't necessitate reading the raw password hashes; the app account can be granted only permission to run a stored procedure that does the comparison yielding a yes/no answer, and denied 'select' on the views/tables/etc.


You should remember to use constant-time comparison when checking for equality in your stored procedure though; do databases typically provide functions/operators for that?

It's something that's just as easy to get wrong in application code, but good frameworks already handle it and abstract it away where you might not realise.
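
I don't know of a database-level guarantee, but on the application side the standard library already ships one: Python's hmac.compare_digest, for example, takes time that depends only on the length of the inputs, not on where the first mismatching byte is. A small illustration (the salted-SHA-256 scheme is just a stand-in, not a recommendation over bcrypt/scrypt):

    import hashlib
    import hmac


    def digest_matches(stored_digest, salt, candidate_password):
        # Recompute the candidate digest, then compare without an early exit,
        # so an attacker can't learn prefix matches from response timing.
        candidate = hashlib.sha256(salt + candidate_password.encode()).digest()
        return hmac.compare_digest(stored_digest, candidate)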


The issue is how you enforce this. Obviously if somebody has root on your server they can just disregard any rules for how the database should be accessed.

Maybe by having the database on external hardware with an API, but I can imagine this is hard to scale. What if you need to sync it across several load balancing servers, or what if you want to make backups?


>> what if you want to make backups?

One could design special hardware with a port just for comparing passwords and a separate port for copying. A hacker would need physical access to copy the database.

>> Load balancing

Could be part of the design. Say, a card in a box, with an internal physical connection inside the box for copying.


> Obviously if somebody has root on your server ...

... then it's game over for you. But I guess intruders typically use either SQL-injection or somehow get the application password to access the password table. In both cases an unreadable table would thwart the intruder.


But is it?

If the only way your production server can check passwords is by sending hash(salt+pw+salt) and getting a true/false reply from a piece of external hardware... then even if an intruder got complete control through some exploit or injection, he wouldn't be able to dump the database for later offline cracking.


You may have just re-invented the Hardware Security Module (HSM) [1] :-)

See also [2] for a cute implementation using an Arduino to secure an AWS API key

[1] https://en.wikipedia.org/wiki/Hardware_security_module

[2] http://stefan.arentz.ca/signing-aws-requests-with-your-ardui...


Postgres provides encryption functions, but you still have to take care of your keys.


Are tech companies that store a fair amount of personally identifying information on millions of people ever subject to security audits? Not saying it would help, but it seems like, as attacks get more prevalent and old security practices become obsolete, users might be able to sue companies for negligence for not adequately protecting stored data. Of course the tech company would countersue for stupid passwords. And even if they want to restrict users from dictionary passwords with symbol replacements, hey, it's patented.


I think in N. America if you do credit card processing you have to conform to PCI standards; unfortunately those standards are loosely audited and rely mostly on self-reporting. More than a few of the security protocols are cargo-culty and even detrimental to security.


[deleted]


Before you decide to be condescending, you should probably brush up on the subject yourself.

The vast majority of merchants fall into PCI-DSS Level 2-4. Those merchants complete a self-assessment questionnaire (SAQ), in other words, self-reporting. They also have to pass a rather rudimentary quarterly automated scan from an approved vendor.

You don't get to external auditors until you get to Level 1, which is a vanishingly small percentage of the merchants in the world.


My understanding is that the first time you are big enough and screw anything up (leak some PII, mess up your privacy policy, etc), the FTC gets involved on behalf of the citizens, and then you settle with them, the terms of which include ISO-whatever compliant audits for the next few dozen years. So Facebook, Twitter, Google, Dropbox are very likely getting audited yearly. I don't know whether e.g. Pinterest has screwed up yet, so they might not be.


One way they could have upgraded their database to make it more secure: http://blog.jgc.org/2012/06/one-way-to-fix-your-rubbish-pass...


I would vote for PBKDF2-HMAC-SHA512. The SHA-512 part makes it a little harder to do on a GPU, since its word size is 64-bit vs 32-bit for SHA-256.

PBKDF2 has been recommended by NIST, while the core algorithm behind bcrypt hasn't seen as much attention (though it has been around for a while and is probably safe).

Edit: Though, it looks like PBKDF2-HMAC-SHA512 is not FIPS compliant.


So, this isn't a bad recommendation, in that PBKDF2-XXX for any XXX is much better than "salted hashes", but I'm confused about this meme that recommends PBKDF2 above bcrypt and scrypt, both of which are better, marginally in bcrypt's case, and significantly in scrypt's.

SHA2-512 is designed to be fast on hardware.


True true, all of them are better than just salted hashes. It is much harder to brute-force bcrypt and scrypt in hardware (ASIC, GPUs, etc...).

It's just important to understand the differences and trust behind the different algorithms. PBKDF2 is NIST recommended, which I would say speaks to its strength (unlikely it will be easily cracked). It can also be used with any input size. Though, calculation requires little memory, so it can easily be implemented in hardware.

bcrypt has been around since 1999 and is still going strong, but is limited to input sizes of 55 characters.

scrypt is fairly new and I don't feel it has been around long enough to be vetted by the security community.


You say it's "unlikely it will be easily cracked", but, obviously, it's demonstrably easier to crack than bcrypt and scrypt.


SHA-1 and SHA-256 use 32-bit words, which makes them very easy to implement on current GPUs. CUDA and OpenCL can do 32-bit operations very quickly, but neither handles 64-bit operations nearly as well right now. Which is why I said originally that SHA-512 would be preferable right now (due to its 64-bit word size).

I don't follow GPU tech closely enough to know if 64-bit performance will improve any time soon, so commodity hardware that can easily brute-force SHA-512 may not be around for a while. Though they can still implement SHA-512 calculations on a GPU a lot more easily than they can scrypt or bcrypt.

Now, if you use PBKDF2 with a large iteration count (let's say 20,000), calculating the final hash takes a lot longer. You could choose an iteration count so that PBKDF2 takes the same amount of time as bcrypt would. The only thing you can't do with PBKDF2 is adjust the amount of memory required to calculate the result.
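
For what it's worth, here's a sketch of tuning PBKDF2-HMAC-SHA512 with Python's standard library; the 20,000-iteration figure is just the example count from above, and in practice you'd benchmark and pick the largest count you can tolerate:

    import hashlib
    import hmac
    import os


    def pbkdf2_hash(password, iterations=20000):
        # Returns (salt, derived key); both need to be stored alongside the iteration count.
        salt = os.urandom(16)
        dk = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations, dklen=64)
        return salt, dk


    def pbkdf2_verify(password, salt, expected, iterations=20000):
        dk = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations, dklen=64)
        return hmac.compare_digest(dk, expected)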


> bcrypt has been around since 1999 and is still going strong, but is limited to input sizes of 55 characters

You can SHA512 the password before feeding it to bcrypt if that's an issue.

55-character passwords cannot be brute-forced anyway.
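
A sketch of that pre-hash trick; encoding the digest (hex or base64) before bcrypt matters, because a raw SHA-512 digest can contain NUL bytes, which bcrypt implementations either truncate at or reject. The function names and the Python bcrypt package here are just for illustration:

    import base64
    import hashlib

    import bcrypt  # pip install bcrypt


    def long_password_hash(password):
        # Pre-hash with SHA-512, then base64-encode so the bcrypt input is
        # NUL-free and a fixed length, no matter how long the password is.
        prehashed = base64.b64encode(hashlib.sha512(password.encode()).digest())
        return bcrypt.hashpw(prehashed, bcrypt.gensalt())


    def long_password_check(password, stored):
        prehashed = base64.b64encode(hashlib.sha512(password.encode()).digest())
        return bcrypt.checkpw(prehashed, stored)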


Everyone should be salting and hashing correctly - it's trivially easy to implement and the cost is trivial. However, once the user database is stolen in its entirety, there are other factors that become as important:

- Did the user use a common password? If they did, then it will be quick to brute-force.

- Did the user reuse the same credentials on other sites? If they did, then the value of recovering the password makes it worth plowing through even well-salted scrypt.

The real problem is that far too many people use the same credentials for LivingSocial and their Gmail.


It was my understanding that hashing and salting is now obsolete, and doesn't really increase security that much. AFAIK, the best 'recommended' way currently is to use bcrypt.


My understanding was that bcrypt uses a salt and is a hash.


In the same way the US Army consists of a soldier, so too is bcrypt a salt and a hash.


Your understanding is completely wrong.

Plus, bcrypt is used as a hashing function and includes a salt, so even if what you said were true, it would make bcrypt obsolete too.

Yes, I know, bcrypt uses a block cipher, but a block cipher can easily be used to build a cryptographic hash (see DES for one example).


Yes, bcrypt is used as a hashing function, and it includes a salt; but it isn't what is usually meant when "hashed and salted" is suggested - the salting becomes part of the algorithm, rather than something you specifically have to implement yourself.

From the perspective of the person using it, it's just a single hash. Saying "hashing and salting works" is not really any better than what people already do, because e.g. the passwords in the case discussed in this article WERE hashed and salted, but with a relatively weak hash.
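
For illustration, with the common Python bcrypt package the salt is generated for you and stored inside the hash string itself, so there's no separate salt column to manage (the example password and cost are arbitrary):

    import bcrypt  # pip install bcrypt

    hashed = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt(rounds=12))
    # The result encodes the algorithm, the cost, and the salt,
    # e.g. b"$2b$12$<22-char salt><31-char hash remainder>".
    print(hashed)
    print(bcrypt.checkpw(b"correct horse battery staple", hashed))  # True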


No, in fact they are a part of bcrypt, along with other great things.


Sorry if this has been asked, but I haven't seen anything on it yet. What about users that only signed in via Facebook? Any actions I need to take?


Does anyone actually know how to close a livingsocial account? I scoured their site but saw nothing that pointed me in the right direction.



