LivingSocial Hacked – 50 Million Customers Affected (allthingsd.com)
157 points by dcu 1607 days ago | hide | past | web | 120 comments | favorite


If the LivingSocial hashes do end up leaking, will the folks who work on cracking them pretty please record and publish their crack rate as it changes while they make progress over the db?

We need these kinds of records kept on real-world events in order to do retrospective studies.

TIA :-)

Yes please do. And then afterwards somebody will be able to use that to incriminate you.

People have been cracking passwords and releasing stats for something like two decades now. Point to one of them who got prosecuted.

Would you agree though that releasing the info might put you on the radar or up your "watch list" karma?

I understand your point, though. It's similar to when I hear people on HN make statements about what could happen in business, with the IRS, or with their city government (say, with inspections); from my years of experience I laugh, because it is an extremely rare occurrence. "You can't claim the laptop you bought that your wife uses as a business expense; if you get audited you are in deep shit!!!"

The only radar that matters is the one you put in your own head. Fuck the radar.

No, I really doubt that it would put you on any kind of watch list.

Simply cracking hashes that somebody else already leaked shouldn't be a crime and it's a common activity for network defenders and security researchers.

Some fundamental concepts used by defenders like "password strength" are subtly dependent upon the abilities of a putative attacker, yet we know that attackers are evolving their skills with every new breach. Thus in order for defenders to reason intelligently about their security and resolve the numerous trade-offs they face, we need data from the widest possible variety of sources when these real-world events occur.

You made this account just to make that comment? I suppose you knew you were in for some downvotes.

They said they hashed and salted the passwords, so it's unlikely the hackers will get "actual" passwords by brute force.

However, what I've seen happen after attacks like this is that the attackers use the e-mail addresses for phishing and just get passwords that way. They already know the victims' e-mail addresses and that they are LivingSocial customers. Expect a phishing e-mail that looks like it comes from LivingSocial.

That's good. However, if a good proportion of people use one of the 1,000 most common passwords, then they can hack those accounts in the time it takes to compute 50M * 1,000 hashes. With CUDA, it seems that you can do hundreds of millions per second. At 100,000,000 per second, you could compute that in 500 seconds (under 10 minutes). http://www.golubev.com/hashgpu.htm - this claims into the billions per second.

When we're talking 5B tries per second, that means trying the most common 100,000 passwords against each of the 50M accounts in under 20 minutes; the most common 1,000,000 passwords against all 50M accounts in under 3 hours; and the most common 10M passwords against all 50M accounts in a little over a day.
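The arithmetic above is easy to sanity-check; a quick sketch using the comment's assumed figures (the 5B guesses/sec rate is the assumption from above, not a measurement):

```ruby
# Back-of-the-envelope crack times for dictionary attacks across all accounts.
ACCOUNTS = 50_000_000
RATE     = 5_000_000_000.0 # assumed guesses per second on GPU hardware

def crack_time_seconds(dictionary_size)
  ACCOUNTS * dictionary_size / RATE
end

puts crack_time_seconds(100_000) / 60.0        # ~16.7 minutes
puts crack_time_seconds(1_000_000) / 3600.0    # ~2.8 hours
puts crack_time_seconds(10_000_000) / 86_400.0 # ~1.16 days
```

The key point is that the cost scales linearly with both the account count and the dictionary size, so a fast hash gives almost no margin.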

Hashing is good, salting is better, but unless there was a work factor involved like PBKDF2, bcrypt, or scrypt, it seems like it's protection against people who don't know what they're doing more than against people who know what they're doing. I'm not the type to say that we need to protect against people who have the money to make ASICs (application-specific integrated circuits, likely used by governments), but I do think protection against nVidia chips is warranted.

Now, it's genuinely possible that by "hashed and salted", they mean they used bcrypt or PBKDF2 (and simply aren't giving details in the email). But, if it's a salted SHA1, I think phishing would be harder than cracking a substantial proportion of them.

> ... then they can hack those accounts in the time it takes to compute 50M * 1,000 hashes

That's why you might want to use bcrypt instead of SHA-1 or SHA-256.


Maybe I'm misunderstanding (entirely possible!) and I think your point still stands, but isn't it 50M * 1,000 hashes * # of possible salts?

EDIT: Just realized that the salts have to be stored somewhere and the attacker probably grabbed those as well. I think that answers my question.

No, because the salt is stored with the password. Salts are used to defend against reversing a password hash into a password, but they don't appreciably impede bruteforcing passwords into hashes.

It does mean that a large dictionary attack will go a lot slower. Rather than computing one hash and comparing it to 50M hashes, the same word must be hashed with 50M different salts and then compared with their respective hashes.
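Both halves of that point can be sketched in a few lines; the record layout and column names here are hypothetical, and SHA-1 is used only because it is what the thread is discussing:

```ruby
require 'digest'
require 'securerandom'

# Hypothetical row layout: each user gets a random salt stored next to the hash.
def make_record(password)
  salt = SecureRandom.hex(20)
  { salt: salt, hash: Digest::SHA1.hexdigest(salt + password) }
end

alice = make_record('letmein')
bob   = make_record('letmein')

# Same password, different stored hashes, so one precomputed table of
# hash(word) is useless across users...
puts alice[:hash] != bob[:hash]

# ...but an attacker holding the rows simply hashes each guess per user:
puts Digest::SHA1.hexdigest(alice[:salt] + 'letmein') == alice[:hash]
```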

You can rent miners on the bitcoin network to use their pooled hashrate to break passwords, same with moxie's service cloudcracker.com

That's a more specific way of phrasing reversing the hash into a password, yes. ;)

Salts are used to make rainbow tables useless.

I'm curious why I've never heard anyone suggest making a public/private key pair, burning the private key, and just encrypting passwords with the public key as your hashing function.

Short of finding an exploit in GPG, you'd have to crack the key used, which would be near impossible if a long key was used. This assumes you keep that key safe.

So why not literally make a key pair and throw away the key?

At best, that's just a hash function, so no matter how long the key is you can still just guess passwords and do trial encryptions just like you would with a hashed, salted password.

If you used the same key for all your site's passwords an attacker could build a rainbow table so you don't even get to not salt.

It would be a lot slower than a normal hash so it'd be harder to brute force, but you can get the same protection in a simpler, more predictable system by just using PBKDF2 with enough repeated hashes.
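PBKDF2 is available in Ruby's standard library via OpenSSL, so the "simpler, more predictable system" is only a few lines. A minimal sketch; the iteration count of 100,000 is an illustrative value, and it is the knob that makes each guess expensive:

```ruby
require 'openssl'
require 'securerandom'

# PBKDF2-HMAC-SHA256: a fast hash iterated many times to add a work factor.
def pbkdf2(password, salt, iterations = 100_000)
  OpenSSL::PKCS5.pbkdf2_hmac(password, salt, iterations, 32,
                             OpenSSL::Digest.new('SHA256')).unpack1('H*')
end

salt   = SecureRandom.random_bytes(16)
stored = pbkdf2('hunter2', salt)

# Verifying a login recomputes the hash with the stored salt:
puts pbkdf2('hunter2', salt) == stored
puts pbkdf2('letmein', salt) == stored
```

Note that the salt and iteration count are stored alongside the hash; only the attacker's cost per guess changes, which is the whole point.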

The situation is a bit more complicated than that : http://security.stackexchange.com/a/6415

I don't subscribe to one "scheme" for passwords since the easiest method (to implement, not crack) will be the avenue malicious hackers also pursue.

This is why most of my custom jobs involve an application-specific salt per installation, in addition to the user-unique salt + password hash, and then CFB-encrypting the salt using another app-specific password + username hash.

I'm not saying PBKDF2 is ideal, I'm just saying that it has all the possible advantages of the proposed GPG based password storage without any of the weirdness of using crypto primitives in a way they weren't intended for.

people actually pay you just to make shit up?

Yes, actually. And to be more civilized when I disagree with them.

Edit: My standard response


I can't disagree with your sentiment... there comes a point where a hardware security module makes more sense than extraordinarily convoluted, tortured concealment of the salt and hashing mechanism.

It was perhaps unnecessarily blunt, but that is the root problem with most of these leaks: Someone who doesn't know what they're doing implementing a scheme that is only slightly more secure than storing the passwords in plaintext.

It turns out LivingSocial was actually using SHA-1 with a 40 byte salt [1].

> LivingSocial never stores passwords in plain text. LivingSocial passwords were hashed with SHA1 using a random 40 byte salt. What this means is that our system took the passwords entered by customers and used an algorithm to change them into a unique data string (essentially creating a unique data fingerprint) – that’s the “hash”. To add an additional layer of protection, the “salt” elongates the password and adds complexity. We have switched our hashing algorithm from SHA1 to bcrypt.

A 40 byte salt? "_additional_ layer of protection"? "elongates the password"? It's clear they thought they were implementing extra security (hey, let's use a 40 byte salt instead of 16, mega protection!) but failed miserably because they did not know what they were trying to combat.

[1] https://www.livingsocial.com/createpassword

A similar method is a "pepper", a form of salt common to all users and stored in the application configuration, allowing the passwords to resist attack even if the hash and user-specific salt are lost. The reason it's not often used is that the assumption is that if your database is compromised, any other commonly-used secret keys on the server will be too. It can be useful for defense in depth, though.
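One common way to implement a pepper (a sketch, not the only construction) is to mix the server-side secret into the password with an HMAC before the usual salted hash; the constant name and values below are made up for the demo:

```ruby
require 'openssl'
require 'digest'

# Hypothetical pepper: a secret kept in application config, never in the db.
PEPPER = 'secret-from-application-config'

def peppered_hash(password, salt)
  # HMAC the password with the pepper first, then do the usual salted hash.
  # An attacker with only the database rows can't even begin guessing,
  # because every candidate password must be keyed with the unknown pepper.
  peppered = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA256'), PEPPER, password)
  Digest::SHA1.hexdigest(salt + peppered)
end

puts peppered_hash('letmein', 'somesalt')
```

In a real system you would still use a slow hash (bcrypt/PBKDF2) for the outer step; SHA-1 appears here only to match the scheme the thread is discussing.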

Serious question: How is "pepper" any different than encrypting a traditional salted hash using a symmetric cipher as if it were any other kind of data?

Maybe because encryption using a public key is much more CPU-intensive and slower than hashing? Since you have users logging into the system all the time, you would need to encrypt each typed password in order to compare it with the record in the database, and this could put a great load on the server CPU if many users log in.

That is the goal. Slower hashing is harder to crack, and thus more secure.

Login load is nothing compared to the rest of the app.

Sounds as if you're suggesting increasing the work for the encryption/decryption? If so, there are well-known ways of increasing the work, and they're designed specifically for hashing this kind of data. I suspect the reason people haven't used pub/priv keys for such uses is that there are better algorithms for it.

It's actually very possible for hackers to still crack many of the passwords.

An attacker could simply put together a dictionary of 10,000 common passwords, then hash them against each user's salt value and see if they get a match for that user's password hash. Assuming hashes are stored as SHA-256, with a GPU hashing setup they'll be able to scan through 50 million users in very little time at all.

I have a dumb question about this. It was a smart question when someone asked me it the other day, but it's a dumb question now because I feel like I should know the answer and I don't. Why can't you get away with using some trivial but obscure modification of one of the standard fast hashing algorithms? It will be just as vulnerable as the standard algorithm, of course, once you know what it is. But now the attacker has to figure out which algorithm you modified and how you modified it. How do they do that?

I get that this is a bad idea that won't work, blah blah security by obscurity and so on. But when I was asked why it doesn't work, I was unable to give a very satisfying answer.

These things are really hard: http://en.wikipedia.org/wiki/National_Security_Agency#SHA. Here we see the NSA suggesting a modification to SHA-0 to make SHA-1. For years, incredibly smart mathematicians didn't see what the NSA saw. So, I'd say there's a decent chance you'd weaken the security. I don't think many people on here have the crypto chops to make their own security algorithms; people like cperciva are the exception, and even they want to publish their algorithms.

I mean, there is a certain logic to what you're suggesting. Let's say you're cperciva and you know what you're doing. You make mistakes like any human being, but you know crypto. So, some cracker gets access to your database. OK, they're probably not smart enough to find a weakness in what you've done and might not even be able to figure it out. But there might be a weakness there.

Plus, you have to think: what if someone gets my code and my database? Then they have the modification you made. If the modification doesn't require more computation, then it's only unknown without the source code. So, with the source code, we're back to the trivial-to-crack case.

The thing is: there are solutions out there to handle this by making the calculations take longer. PBKDF2, bcrypt, and scrypt all exist. They deal with this specific problem in a way that even if someone gets your hashes, salts, and code, you're less vulnerable.

tl;dr: with the obscure case, you're not gaining protection if they get your code along with your database.

Well, if they don't have your source code, why modify the algorithm? You can just use sha1(password + user_salt + site_secret) -- that site_secret just made the SHA-1 unique to your site. Of course, if they have your source code, then it doesn't matter: they would have your 'site_secret' or your modified algorithm. Better would be not storing your site_secret in an accessible way (not on disk).

Edit: See udk1 below -- he's right, sha1 is an outdated algorithm for this purpose. Poor example choice on my part.

You shouldn't use SHA: it's not designed for hashing passwords. Check out bcrypt / scrypt and read: http://crackstation.net/hashing-security.htm

That doesn't mean an additional secret stored on the server is a bad idea either way.

I've heard it referred to as a pepper (to go along with salt) and is in the application code.

bcrypt(password, salt, pepper) => hash. So even if they grab the db with the salt, the effective password to crack turns into 'password3jkl453jklgfuja9oph4mn' instead of just 'password'. Impossible.

And if they do get your site's code, you're back to a strong hashing algo and a salt.

It would work, but a) you might break the algorithm in ways that introduce collisions, and b) if your code is stolen in the breach, the obscurity is useless.

Most people should use bcrypt with enough rounds to make it difficult to brute force. scrypt is also an option, though newer (good / bad: http://security.stackexchange.com/questions/26245/is-bcrypt-...).

The attacker has pairs of hashes and known passwords (for instance, for their own accounts) and only has to figure out what the mapping is for those; the answer then extends trivially to the rest of the passwords.

That's the "theoretical" answer. The real answer is, popping someone's database virtually always --- you know what, let's just assume always for now --- gives you a remote shell, and from there, the actual code.
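The "theoretical" attack above can be sketched concretely. A toy illustration: the attacker registered their own account, so they know one (password, hash) pair, and the "obscure tweak" is one of a handful of guessable variants. The candidate schemes here are invented for the demo:

```ruby
require 'digest'

# Candidate "obscure tweaks" an attacker might try against a known pair.
CANDIDATES = {
  plain:    ->(pw) { Digest::SHA1.hexdigest(pw) },
  double:   ->(pw) { Digest::SHA1.hexdigest(Digest::SHA1.hexdigest(pw)) },
  reversed: ->(pw) { Digest::SHA1.hexdigest(pw).reverse }
}

known_password = 'correct horse'
# Pretend the site's secret scheme was "reverse the SHA-1 hex digest":
leaked_hash = Digest::SHA1.hexdigest(known_password).reverse

# Mechanically test each candidate scheme against the known pair.
scheme, _ = CANDIDATES.find { |_, f| f.call(known_password) == leaked_hash }
puts scheme
```

Once the scheme is identified from one pair, it extends to every other hash in the dump, exactly as the comment says.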

Ah, that theoretical answer does fill in one missing link for me. I kept wondering how you could detect even a trivial modification of a standard hashing algorithm (say, reversing the hash) if all you had to look at was its outputs. It seemed like a massive search problem through the space of all possible modifications, though no doubt there exist sophisticated techniques to apply. But if you have some input/output pairs, the problem seems entirely more tractable. Even I can begin to imagine how one might tackle that.

The point you and others make that if the database is compromised then the code almost certainly is too, is clearly the best answer though. I am now prepared for the next time I encounter my interlocutor at extended family dinners!

In my house we have a rule that we don't discuss password hashing at the dinner table

Cryptographically, you can address the problem you've set out for yourself simply by hashing a 128 bit random number along with the password, and keeping that number a secret. If attackers can get your code, it doesn't matter how you obscure your hash, because they'll have the algorithm. But through trial and error, an attacker might figure out how you tweaked an algorithm; all the atoms in the solar system (or something like that, I can never remember) could be computers trialing and erroring against a 128 bit random number and they'd never figure it out.

128 bits is a big number/keyspace but the solar system is no lightweight, either.

    2^128 = 3.4028E38 
    solar mass = 1.9884E30 kg or about 9E56 atoms. 
You don't need to get down to atoms, you could be crunching 128 bit keys on a single AWS dyson1-medium instance.

256 bits is the one where the hosting costs start getting not merely planetary but intergalactic.

    2^256 = 1.1579E77
    atoms in Milky Way = 2.9E76 atoms
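The comparison can be checked directly; this sketch just reuses the rough atom-count figures quoted in the comments above:

```ruby
# Keyspace sizes vs. the rough atom counts from the thread.
key128 = 2 ** 128     # ~3.40e38 keys
key256 = 2 ** 256     # ~1.16e77 keys
atoms_in_sun = 9.0e56 # rough figure quoted above

puts key128 < atoms_in_sun # a single star dwarfs the 128-bit keyspace
puts key256 > atoms_in_sun # the 256-bit keyspace dwarfs the star
```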

You're presuming that "obscure or trivial modification" to a hashing algorithm would be just as secure (or just as vulnerable) as the original. That is no guarantee. It's entirely possible that you might fundamentally break the algorithm and make it much much less secure.

This security stuff is hard. Remember when Debian made a "simple fix" to OpenSSL and made all the SSH keys generated over two years brute-forceable? Do you think you can do better? It's wiser to leave this to the experts.

Debian made a dumb software coding mistake that broke entropy input into OpenSSL's RNG. This is a part of the code that's very hard to test for correctness because by its nature, it must behave unpredictably.

This could happen (and often does) to any software. It's not a good example of someone attempting to design their own, or modify an existing, crypto primitive.

It's a good example of how someone can look at an algorithm/software and not realise what bits are critically important. "Oh this bit I can change without anything bad happening". It shows there are unknown unknowns in security.

Well, the Debian maintainer did realize this was important and emailed the OpenSSL developers. But due to a tragicomic miscommunication the breaking change got committed anyway.

If they had access to the database, it's not unreasonable that they had access to the code.

I think perhaps the satisfying answer you are looking for is: http://en.wikipedia.org/wiki/Kerckhoffs%27_principle

Yes, it's a great idea.

It's called "salting" the password. Everyone does it. It works really well.

Of course, if they can steal your salt (from your source, or a config file) then you lose.

What the OP is talking about is commonly called a 'pepper': some fixed secret in your application code that impacts the generated hash.

Since salts are random, and unique per password, you shouldn't be storing them in a config file.

From what I understand, you shouldn't ever use SHA or MD5 for hashing passwords; bcrypt is a commonly implemented standard and much better than SHA. SHA is designed for hashing data quickly, which is the opposite of what you want when defending against brute-force attacks.

I would like to see someone knowledgeable explain this. Using a hash function designed for speed as a way to protect passwords seems at best not very thoughtful and at worst completely stupid. Why are so many (maybe most?) services using it?

The short answer is that it doesn't matter, because if you live in a fantasy land where users never re-use passwords, always pick passwords with sufficient entropy, and you never lose control of your data, key-stretching is neither necessary nor helpful.

The long answer is that security experts are expensive and trying to figure out security systems from first principles is very hard and fraught with disaster. The result is that almost all security practices are transmitted via folklore.

Also, the documentation for security software is often dense and hard to understand, while the documentation for non-security software provides advice that is simple, actionable, and terrible: http://dev.mysql.com/doc/refman/5.7/en/encryption-functions....

* The undocumented-algorithm PASSWORD() function that mysql uses to hash passwords apparently defaults to unsalted double-SHA1

* You are helpfully advised not to use this function to store user passwords, instead you're advised to use MD5 or SHA2.

* Salting is not discussed at all.

* Key stretching functions are not provided or discussed.

Laziness or ineptitude. If they are using MD5 or SHA1 due to legacy systems, then migrating passwords to a new encoding is troublesome at best and impossible at worst.

Many tutorials and reference documents use MD5 and SHA1 in their coding examples, so right from the get-go novice developers are at an automatic disadvantage.

Then make people reset their passwords; force them if necessary.

Cryptographic hash functions are crypto primitives (= building blocks), and very useful ones. A hash function is simple and fast by itself but can be used in safe constructions. This is completely fine. One example is PBKDF2 using HMAC-SHA-1 and many iterations.

Exactly, in transit or temporary data vs long term stored (in place) data.

Could be worse. My bank doesn't even hash their passwords at all.

Hashed and salted means almost nothing. It could be one invocation of SHA-256 (or SHA512, or SHA-3) with a salt, which is fairly meaningless unless the key/password contains a large amount of entropy (almost no passwords do.)

It's much more important to know what algorithm they were actually using, e.g. scrypt, bcrypt or PBKDF2, and what the settings/work factors/iteration counts for them are/were. If a company is reluctant to disclose that, they're likely using something that's highly vulnerable to parallelization/efficient brute force cracking, regardless of what kind or size of salt was used.

I like how everybody is optimistic.

It could have been MD5!

Since, AFAIK, it's built on Rails, that's unlikely. However, it's possible that it's SHA-1 since popular Rails auth solutions used that by default at the time the company was founded (2007), assuming that's what they started with and never switched from.

I'd be very surprised if Rails didn't support upgrading the hash over time - anyone know if it does with has_secure_password?

Prior to 3.1, Rails didn't have built-in support for password hashing or user auth beyond HTTP auth, so it was handled by libraries (like devise or authlogic), generated code (such as with restful-auth, which was popular in 2007), or custom code. I don't think it's very common for legacy codebases to be migrated to has_secure_password, but it's not impossible. There is no built-in migration for has_secure_password, AFAIK.
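Upgrading the hash over time is usually done with a rehash-on-login pattern (a sketch of the general technique, not something Rails ships): verify against the legacy hash and, on success, immediately re-store the password under the stronger scheme. It only works at login, the one moment the server sees the plaintext. The record layout and algorithm names below are hypothetical, with PBKDF2 standing in for the stronger algorithm:

```ruby
require 'digest'
require 'openssl'

def strong_hash(password, salt)
  OpenSSL::PKCS5.pbkdf2_hmac(password, salt, 100_000, 32,
                             OpenSSL::Digest.new('SHA256')).unpack1('H*')
end

def verify_and_upgrade!(record, password)
  case record[:algo]
  when 'sha1'
    # Check the legacy hash; on a match, transparently upgrade the record.
    return false unless Digest::SHA1.hexdigest(record[:salt] + password) == record[:hash]
    record[:hash] = strong_hash(password, record[:salt])
    record[:algo] = 'pbkdf2'
    true
  when 'pbkdf2'
    strong_hash(password, record[:salt]) == record[:hash]
  end
end

user = { algo: 'sha1', salt: 'abc', hash: Digest::SHA1.hexdigest('abc' + 'hunter2') }
puts verify_and_upgrade!(user, 'hunter2') # user[:algo] is now 'pbkdf2'
```

Users who never log in keep their legacy hashes, which is why breached sites often force a password reset instead of waiting.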

LivingSocial is a big enough company I wouldn't be surprised if their authentication code was all or mostly custom.

We can expect 50% of the password hashes to be cracked in the first few minutes, 10% will never be cracked, and the 40% in between will survive with a roughly exponential decay.

I wouldn't be surprised if we eventually see a breach where the attackers, after securing the DB of emails, go on to email everyone informing them of the breach before the company does, asking them to reset their passwords. This would give the attackers the old password of each person (which has likely been reused on other services) AND a new password that they may already be using on some services or plan to use on other services in the future.

Why would the attacker, if they got the passwords, not bother to look at the column just to the right and take the salts as well? Given that, the attacker can try hundreds of billions of password combinations per second.

The point of unique salts is that even if all the users' passwords are the same, the resulting hashes stored in the database are different. So, let's say you wanted to hash your set of 50,000 different, commonly used passwords. You'd have to hash that list of 50k passwords for EACH individual user. So instead of 50,000 hashes... you have 50,000 x the total number of users.

I am not following your logic here.

What I am saying is that if the attacker gets a row from the database that has the user's id, the password hash will be in a column labeled 'password'. The next column is likely to be the salt generated for that user's password; let's say that this column is named 'salt'. With the salt and password hash for each user, you can run a dictionary across each combination at a furious rate.

The attacker does, of course, have the salts as well, since they're part of the hash. I don't see how they can try hundreds of billions of password combinations per second, though.


25 late-2012-vintage GPUs get you 180 billion MD5 tries per second. With a fairly modest budget you can rent a whole lot more GPU power than that. A little Googling gets me several companies offering password hash cracking as a service.

> 25 late-2012-vintage GPUs get you 180 billion MD5 tries per second. With a fairly modest budget you can rent a whole lot more GPU power than that. A little Googling gets me several companies offering password hash cracking as a service.

But we're assuming, at the very least, that they're not using MD5 and are instead using SHA-2/SHA-256 or (s|b)crypt.

There's an order of magnitude difference between brute forcing MD5 and brute forcing something better.


MD5 is 2-3x as fast as SHA-1

MD5 is 5-7x as fast as SHA-256

I don't know why anyone would assume they are using anything better than MD5/SHA-1 considering the history of incidents like this.

https://www.livingsocial.com/createpassword: "LivingSocial passwords were hashed with SHA1 using a random 40 byte salt."


Would encrypting the email addresses be feasible? They do seem to have a lot of value to hackers, but I'm not sure what the technical limitations would be for a user-base this large or with their functionality.

Encrypting would be pointless. Hashing would be possible, but then you break things like email notifications and password resets.

Upon a password reset submission, check whether hash(email_submit) exists; if so, email the password reset instructions to the email_submit address :)
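That flow can be sketched in a few lines; the `USERS` store, addresses, and mailer call are all made up for the demo:

```ruby
require 'digest'

# Store only hash(email), never the address itself (hypothetical user store).
USERS = { Digest::SHA256.hexdigest('alice@example.com') => { id: 42 } }

def request_reset(email_submit)
  user = USERS[Digest::SHA256.hexdigest(email_submit)]
  return :no_account unless user
  # send_reset_email(email_submit, user[:id])  # hypothetical mailer call;
  # we can mail the address the user just typed even though we never stored it.
  :reset_sent
end

puts request_reset('alice@example.com')
puts request_reset('mallory@example.com')
```

Note this only works for flows where the user supplies the address; server-initiated notifications are exactly the part that breaks.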

Still, notifications would be toast.

Though most email notifications are annoying, they can be useful (or crucial).

If you don't need email notifications, then it's not a problem.

One nice feature from gmail/yahoo/outlook/etc.: temporary email addresses that forward to your email. This would solve the problem.

Or maybe alternative notification methods are preferred? If they wanted, a user could input an email address with a plus tag (if they're using Gmail) and then receive notifications only at that address. If the user wanted, they could then block all incoming messages that match that generated address.

Why is encrypting pointless if you generate the equivalent of a private key in code? Of course, if the attacker also gets the code, then you're toast, but getting the code is not guaranteed. So, seems like you'd have some additional protection with encryption.

it's well known that many users pick passwords like "password", "123456", "qwerty", etc...

so I don't think cracking many of those passwords will be a problem

"Ruby on Rails is the platform upon which LivingSocial runs." - http://en.wikipedia.org/wiki/LivingSocial

I'm just speculating but the first thing that ran through my head is 'this must be a Rails breach'.

I'm also guessing that a very large percentage of the 50 million users signed up like I did when Amazon had a deal (something like $20 gift card for $10).

LivingSocial has put up somewhat of a statement on their web site asking you to change your password:


""" LivingSocial recently experienced a cyber-attack on our computer systems that resulted in unauthorized access to some customer data from our servers. We are actively working with law enforcement to investigate this issue.

The database that stores customer credit card information was not affected or accessed.

Although your LivingSocial password would be difficult to decode, we want to take every precaution to ensure that your account is secure, so we are expiring your old password and requesting that you create a new one. """

Well, yes, you deserve what's coming to you if you run an unpatched system but that's a pretty baseless speculation.

I don't think he implied the system was unpatched, only that an issue with Rails may have been exploited to compromise the site. It is reasonable to posit there is an undisclosed security issue with Rails that is being exploited here.

This e-mail is important, so please read it to the end.

LivingSocial must have an interesting corporate culture if the subject header of "Security Incident" isn't enough for employees to actually read the email.

LivingSocial has thousands of employees, many in sales and other non-technical roles. That first sentence reminds them "this affects you, it's not just for the engineers." That seems prudent to me.

Released late in the day on a Friday to try to minimize the news cycle and guarantee fewer people will see it.

The body of an email I received 6:30 AM Eastern time:

from <updates@livingsocial.com>:

"IMPORTANT INFORMATION

LivingSocial recently experienced a cyber-attack on our computer systems that resulted in unauthorized access to some customer data from our servers. We are actively working with law enforcement to investigate this issue.

The information accessed includes names, email addresses, date of birth for some users, and encrypted passwords -- technically ‘hashed’ and ‘salted’ passwords. We never store passwords in plain text.

The database that stores customer credit card information was not affected or accessed.

Although your LivingSocial password would be difficult to decode, we want to take every precaution to ensure that your account is secure, so we are expiring your old password and requesting that you create a new one.

For your security, please create a new password for your (removed my email address) account by following the instructions below:

Visit https://www.livingsocial.com
Click on the "Create New Password" button (top right corner of the homepage)
Follow the steps to finish

We also encourage you, for your own personal data security, to consider changing password(s) on any other sites on which you use the same or similar password(s).

The security of your information is our priority. We always strive to ensure the security of our customer information, and we are redoubling efforts to prevent any issues in the future.

If you have additional questions about this process, the "Create a New Password" button on LivingSocial.com will direct you to a page that has instructions on creating a new password and answers to frequently asked questions.

We are sorry this incident occurred, and we look forward to continuing to introduce you to new and exciting things to do in your community.

Sincerely, Tim O'Shaughnessy, CEO"

I was REALLY hoping there would be no links to livingsocial in that email and that there would just be instructions to enter it in yourself. Now a phisher can copy the entire email and have that link in the yellow portion send you to a phishing site.

It doesn't really matter. A phisher could write an entirely different message with a link in it just as easily.

Is it time yet for someone to build a cross-platform account management appliance? Currently, we seem to be stuck between things like Kerberos, which are complex, and your framework's built-in account framework, which often uses SHA-1/MD5 + salt and has no mechanism for upgrading to better alternatives.

While things like Persona are awesome, for those who insist on using passwords, why not have a standard "thing" that handles them? It should be able to switch password schemes on the fly (via re-encryption or double encryption), store data separately from your main DB, and be all kinds of paranoid.

Well, you end up with something like Kerberos. Horrifying complexity rarely evolves in a vacuum.

Ever tried building an ecommerce platform? It's really funny. You start out as a fresh whippersnapper who is all like "MAN, stuff like Spree is maddeningly complicated. We don't need something that complicated, we can just start real easy!" and fast forward three months and you've got a few white hairs, a deep hatred of PayPal, and something not dissimilar from what you wanted to avoid in the first place.

The better alternative would be to lobby all plugin vendors to switch to bcrypt/insert preferred alternative here.

Yeah, doing this with an HSM would allow you to keep plaintext passwords (avoid expensive algorithms like scrypt/bcrypt) encrypted under an in-HSM key. As long as the comparisons happened in the HSM, no one would be able to steal the passwords. You could throw other heuristics in to rate limit or whatever.

HSMs are pricey ($5-25k), but you could maybe lower the cost if you started using them in volume for web logins; no reason you couldn't do something protected against logical/remote attacks only for $250. Hardware attacks too, if you had high enough volume, for <$1k.

Wouldn't that mean a single point of failure?

If you put the whole read-only auth system on a single machine, yes, but there's no reason you couldn't have an arbitrary number of these boxes. You'd presumably put a few in each cluster of frontends at scale, not one per frontend.

Is their Github account a good indication of the state of their Rails setup? This appeared to be the only Rails-related gem they've open-sourced that's relatively well-followed:


The gemspec here: https://github.com/livingsocial/rails-googleapps-auth/blob/m...

		gem.add_runtime_dependency("actionpack", [">= 2.3.5"])
		gem.add_runtime_dependency("ruby-openid", ["= 2.1.8"])

		gem.add_development_dependency("activesupport", ["~> 3.0"])
		gem.add_development_dependency("tzinfo", [">= 0.3"])
		gem.add_development_dependency("actionpack", ["~> 3.0"])
		gem.add_development_dependency("activemodel", ["~> 3.0"])
		gem.add_development_dependency("railties", ["~> 3.0"])
		gem.add_development_dependency("rspec-rails", ["= 2.5.0"])

One of the recent critical vulnerabilities involved ActionPack, pre x.x.10:


The gemspec above specifies ActionPack 2.3.5 and above...theoretically, it's possible they upgraded their Rails installation without having to upgrade this particular gem...and perhaps they don't use this gem at all anymore (hasn't been updated in 8 months), so this is all speculative.

edit: Going to assume that LS at least protected from the Homakov-mass-assignment vulnerability, demonstrated in March 2012: https://github.com/rails/rails/issues/5228

> Is their Github account a good indication of the state of their Rails setup?

No. Source: former employee

LivingSocial has also made significant contributions to Resque, which I maintain.

So enough time has passed for the attack to reach the higher-ups, an internal email to be authored, that email to be leaked, and its contents confirmed by the company... and NO emails out to the users? What's the harm in making those two emails go out concurrently?

I'd want to prepare my support and sales staff before sending off an announcement to my users.

That's a fair response...now what's a reasonable amount of time for that to take place knowing that customers are learning about it from third parties?

If you go to the website, they won't let you log in now without resetting your password. So essentially you only know there's a problem if you go to their website, since they've yet to send an email.

But they had time to send me some great Mother's Day Deals half an hour ago!

  > But they had time to send me some great Mother's Day Deals half an hour ago!
... which their creative department had already developed based on a standard template and scheduled to have deployed today. Meanwhile, the security incident email had to be crafted and is probably winding its way through legal.

Looks like a neat way to get you to go to the website, which, as you wrote, forces you to reset the password. Maybe this is the way to clean up the mess, and they'll only inform the users who don't click the offer ;)

I would hope by this point most of those daily deals emails are automated.

That was probably scheduled in advance...

Much of the issue with passwords could be avoided if the site just sent you a randomly generated password in your introduction email. That way there's no chance of reuse, and if the DB gets compromised, just issue new passwords.
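For what it's worth, generating such a password is a one-liner with Ruby's stdlib; `generate_password` is just an illustrative name here.

```ruby
require "securerandom"

# Cryptographically secure random alphanumeric password,
# suitable for assigning to a new account at signup.
def generate_password(length = 16)
  SecureRandom.alphanumeric(length)
end

generate_password  # e.g. "tQ2nX9aKbR4mZp7c"
```

(Sending it over email has its own risks, of course — the mailbox becomes the weakest link, which is arguably already true given password resets.)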

At what point do we call collecting large numbers of user credentials in a central place that can be accessed worldwide a bad idea?

To me this is the problem P2P should be solving, not Facebook, Google or Mozilla

And there seems to be no way to delete a Living Social account (despite some language that says you can terminate the contract by canceling the account). Very 503 over there right now.

It is kind of weird, especially because there's not even an appropriate label in the dropdown menu for a help ticket. But they deleted mine 5 hours after sending it, so maybe they have a macro set up.

What are the implications if you logged in using Facebook connect?

Yeah, this is what I used.

Could this be another Linode situation, where the company misleads the public about what was actually hacked in an attempt to save its public image?

Could it be that you are actually a serial killer on the run from prison?

What's the point in engaging in such idle speculation? If you have actual information to discuss, great. Otherwise, it could just as well be that they are working on a comprehensive postmortem of exactly what happened with incredible amounts of detail.

I think that suspecting the worst has become a default position after so many appalling breaches (RockYou, Gawker, etc). After losing their 50M entry user database one would expect them to be a bit more forthcoming.

If ANYONE can figure out how to cancel a living social account, let me know. What a fucken crap site.

Only used LivingSocial once, thankfully I used my simpler "Untrusted" password, and not a more secure one.

You reuse your passwords? Tsk tsk tsk...

Yes, Mother, THIS is what I'm going to wear... (heh)

I'm pretty lazy when it comes to passwords for services that don't house any private information, and LivingSocial's buggy UI design heavily influenced my decision to use PayPal rather than assume they were PCI compliant.

LivingSocial messed up a meal order of mine so badly one time that I had to pay for the meal twice, and they still didn't refund my payment. Worst company I've ever dealt with, and I'm really not surprised to see they have a security breach, considering the awful experience I had with them and how it seemed that they just didn't know what was going on.

So, should I change my email password as well?

If you use your email password for any other site, yes!
