AdultFriendFinder was hacked (leakedsource.com)
283 points by xurukefi on Nov 13, 2016 | 242 comments



Friendfinder and their brands are run by Andrew Conru. They're quite successful; they own Penthouse. At one point he tried to buy Playboy, but Hefner wouldn't sell. They don't really have 300,000,000 accounts; there's been litigation over their fake accounts. It's probably going to turn out to be like Ashley Madison, where over 95% of the female accounts were fake.

They had a breach last year, but it wasn't as big.[1]

[1] http://www.ibtimes.com/adult-friend-finder-dating-site-known...


They don't own Penthouse. That's one of the things that's so weird about all this. They sold Penthouse.com in February -- but then still managed to lose all of its login credentials in a database breach 8 months later...

https://it.slashdot.org/story/16/11/13/2144229/hack-exposes-...


They kept a backup?


Which is usually not allowed after acquisition, exactly for reasons like this.


I thought Conru owned both Ashley Madison and AFF? Didn't some AFF info leak during the Ashley Madison hack? Or am I not remembering that correctly?


Even if there are only 3 million real accounts that's still massively problematic for people who have one.


> where over 95% of the female accounts were fake

This is definitely not the case with AFF.


"the hashed passwords seem to have been changed to all lowercase before storage". I have no words to describe how idiotic this is. How do people come up with this and still get paid?


This attitude ignores the fact that risk comes in multiple forms. While lowercasing the passwords increases the guessability of the password when attempting to log in to this site, it actually reduces the value of the password in a breach of this sort, since it may not be usable verbatim on other sites even if the user typed it the same way. As this sort of breach is now quite common, I think many "best practices" for password security that date back to the design of Unix logins are arguably no longer best. For example, if we just used CRC24 to hash passwords it would be nearly impossible to recover them from the hashes, but at a practical level it would be comparably secure on the front end: it would take thousands of guesses to find a working collision, which is easily preventable at the front gate by locking the account after a small number of bad guesses; and it would be far more secure on the backend, since any CRC24 code could derive from millions of possibilities.


Wouldn't the leakage be catastrophically worse if storing CRC24 became commonplace?

If they leak, it becomes trivial to find working password synonyms to stick into the other sites you say benefit from discarding information.


Wouldn't each company salting on their front end prevent this? If you find a password synonym that works for a single company, it probably isn't the real password, so it would give you no information on a synonym for another company (unless it is the real password, in which case you can just use it directly, no need to look for another synonym).


You seem to be describing a non-standard way of salting hashes, and in the case of modifying the case, an extremely poor way of doing it. Please don't do that.


Not changing the salting. Changing the hashing algorithm.


This is intended as a thought experiment, not a concrete design for how to properly store passwords. In fact CRC24 doesn't really burn as much information as I assumed, e.g. on the top 100,000 passwords it only generates about 300 collisions (so over 99% still generate unique hashes). If one was really going to go in this direction, a specialized hash that is deliberately collision-prone across password-like strings is probably needed.
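For anyone who wants to reproduce that kind of count, here's a rough sketch; I'm assuming the OpenPGP CRC-24 parameters and a local passwords.txt wordlist, neither of which comes from the parent comment.

    from collections import Counter

    def crc24(data, init=0xB704CE, poly=0x1864CFB):
        # OpenPGP-style CRC-24 (RFC 4880); any 24-bit checksum would do for the experiment.
        crc = init
        for byte in data:
            crc ^= byte << 16
            for _ in range(8):
                crc <<= 1
                if crc & 0x1000000:
                    crc ^= poly
        return crc & 0xFFFFFF

    with open("passwords.txt", encoding="utf-8", errors="ignore") as f:
        words = [line.rstrip("\n") for line in f][:100_000]

    codes = Counter(crc24(w.encode("utf-8")) for w in words)
    collisions = sum(n - 1 for n in codes.values() if n > 1)
    print(f"{collisions} of {len(words)} passwords collide with another one")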


by that token I'd imagine ALLCAPSing would overlap many fewer passwords that have been reused for other sites.


If you know the password in allcaps, then you know it in lowercase, so why would that matter?


Assuming other sites DON'T do this it helps the security of other sites. But it hurts the security of your site.


Overall, yeah, it's not a good thing to do!


> While lowercasing the passwords increases the guess-ability of the password when attempting to log in to this site it actually reduces the value of the password in a breach of this sort

But why don't they use the proven common-sense strategy of not storing the passwords at all, but store the hashes instead? They can validate by converting user-input to a hash and then there is no harm even if the user auth table is stolen.


According to the article they were indeed hashes, but they were converted to lowercase _before_ hashing.

This effectively makes your password case insensitive and probably reduces the % of support tickets (some people might not just click a reset password link and will insist they were typing it right, so they will open a ticket - all because they forgot capslock). It reduces operating costs at the expense of lower security and somebody must have considered it to be worth it.


Read that again, slower.

> "the hashed passwords seem to have been changed to all lowercase before storage"


I agree that password-typo tolerance may seem like a horrible idea on the surface. The "str to lower" approach is an especially aggressive way to increase usability.

However, there's recent work [0] from Cornell that explores the security-usability tradeoff when correcting password typos. It turns out that accepting specific classes of typos (e.g., caps lock on: if password is "Password" then allow "pASSWORD") can increase usability with minimal security impact.

[0] https://www.cs.cornell.edu/~rahul/projects/pwtypos.html


Case insensitivity is pretty Anglocentric. Much more interesting would be keyboard layout insensitivity. As an Israeli developer, if my password is, say, asdf, then from a usability viewpoint it would do me wonders if שדגכ were also accepted: I'll be writing something in Hebrew, switch to a different page, click a link that brings me to a login page where my username is saved, and whoops, I've just entered my password in the Hebrew layout by accident.

Reporting caps lock usage and not also keyboard layout usage is a pretty bad usability hole IMO.


I've been out of the web game for a while - does the browser report what your keyboard layout is?


Browser language preferences, yes. Keyboard layout, no. For one thing, layout names vary between OSes, for another, custom keyboard layouts are a thing... However, you can try and read individual keypress events and see which printable characters are generated for which key codes.


FWIW on my iphone I use the English layout (qwertyuiop) for all roman-alphabet languages I use, since spelling correction language is connected to the declared "language" (a strange yet sensible overloading).


>if password is "Password" then allow "pASSWORD"

Facebook does this, in case you weren't aware.


I thought case insensitivity was only for the first character?


They do try changing the capitalization of only the first character, but also invert the capitalization of all the characters in the supplied password. http://www.zdnet.com/article/facebook-passwords-are-not-case...


eugh. really? If true, I'd imagine they do it to appease mobile users.


At Facebook's scale, I'd bet someone has research and evidence of 7 digit per year reductions in support costs by making capslock (or forgotten capitalisation) problems "go away"...


Why would you imagine that? In my experience, it's much harder to turn on caps lock by accident on mobile.


The first character is often auto-capitalized in many input fields by your mobile browser.

If a field is properly declared to be a password field (<input type="password" name="pwd">) of course ideally this wouldn't happen (plus, the characters get masked with stars, and hopefully what you type doesn't end up in your autocorrect dictionary, etc etc) - but it's full of shitty browsers out there.


GP was specifically referring to caps lock inverse, not initial character capitalization (which I agree is more likely to be the problem).


More like if your password has a caps in it and you forget it on mobile.


autocapitalize="none" on input elements works since iOS 5, and seems to work on modern Android based on a quick google search.


Too bad that web sites are very English centric. They ask for upper case, I enter Ö, they don't recognize it as upper case.


Yup. Also fun when trying to get paid from Google Play. "Please enter your name exactly like on your bank statement", then refuses since my bank statement name contains an ö...



That's nothing!

My name in Greek has a letter with a double accent. Perfectly valid of course. But many (Greek) sites will reply that this is NOT a Greek letter :)


Or you could just show a message if the caps lock is on.


How do you detect if caps lock is on with JavaScript in the browser?


That's a good idea but it doesn't work for other common typos: wrong case of only the first character, an extraneous character at the end of the password, etc.


The problem is that it requires you to store multiple hashes while case normalization doesn't.

At least they do not strip special characters out of the password.


Actually, I don't think there's a need to store multiple hashes.

Here's one idea: Let's say the user's password is P. The user enters some password P' with a typo. The authentication check is "does H(T_k(P')) == H(P)" for some set of transformations {T_1, T_2, ..., T_n}. Each transformation T_i hypothesizes that the user made a specific mistake. (e.g., T_1 is the caps lock is on so we need to flip the case of all the characters)
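Concretely, that check can be a straight retry loop against the single stored hash. A minimal sketch, using bcrypt purely as an example verifier (not what AFF actually used):

    import bcrypt

    # Each transform hypothesises one specific typo (T_1 = caps lock on, etc.).
    TRANSFORMS = [
        lambda p: p,                              # typed correctly
        lambda p: p.swapcase(),                   # caps lock was on
        lambda p: p[:1].swapcase() + p[1:],       # auto-capitalised first letter
        lambda p: p[:-1],                         # one extra trailing character
    ]

    def check_with_typo_tolerance(supplied, stored_hash):
        # Only H(P) is stored; we test H(T_k(P')) == H(P) for each T_k.
        return any(bcrypt.checkpw(t(supplied).encode("utf-8"), stored_hash)
                   for t in TRANSFORMS)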


That caps lock case is a good one, but I don't really see other realistic instances where you wouldn't just say "username or password is incorrect".


A few other cases: transcription errors (i.e., mistaking 1 for l), wrong case of the first character of the password, extraneous character at the end of a password, etc.

The paper I linked to actually does a good job motivating specific classes of typos by looking at real typos from Dropbox users.


No, you can just store the original hash and try two times when matching.


I guess this was done to allow inputting a slightly wrong password?

I seem to recall Facebook allowed login for a pass "fooBar" with "FooBar" (phone input capitalizes the first letter) and "FOObAR" (caps lock pressed).

Still seems stupid to me, but if you care a lot more about letting people in than about their security it might make sense.


Chase's online banking login is case insensitive (for both username and password). Frightening that a bank cares less about security than letting people in.


I don't know how things are in the States, but a lot of banks in the UK ask you to input randomly selected characters from your password rather than asking for the whole thing. This suggests they're storing the passwords in the clear. The Financial Times wrote an article about it recently:

https://www.ft.com/content/33503e4a-8f95-11e6-a72e-b428cb934...


I'm sure that is also preventing lots of people from using randomly generated passwords.

I do, and it's not fun when I get asked to type in the 12th, 13th and 15th character.


They could hash all the possible triplets. This would be still ridiculously easy to crack, but it's not cleartext anymore so maybe it would pass the tick box security audit this way?
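Something like the sketch below, say. This is just one way to realise the idea; the iteration count and encoding are arbitrary, and as noted above the per-triplet search space is still tiny.

    import hashlib, hmac, os
    from itertools import combinations

    def store_triplet_hashes(password, n=3, iterations=100_000):
        # Pre-hash every n-character subset so the server can later ask for,
        # say, characters 12, 13 and 15 without keeping the cleartext around.
        salt = os.urandom(16)
        table = {}
        for positions in combinations(range(len(password)), n):
            chars = "".join(password[i] for i in positions)
            msg = repr(positions).encode() + chars.encode()
            table[positions] = hashlib.pbkdf2_hmac("sha256", msg, salt, iterations)
        return salt, table

    def check_triplet(salt, table, positions, chars, iterations=100_000):
        msg = repr(tuple(positions)).encode() + chars.encode()
        expected = table.get(tuple(positions))
        candidate = hashlib.pbkdf2_hmac("sha256", msg, salt, iterations)
        return expected is not None and hmac.compare_digest(expected, candidate)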


This is because a lot of banks' Internet banking systems evolved from their phone banking systems. The random digits thing was so that someone couldn't guess your code when you said it out loud over the phone.


Yes, when I wound up with Fidelity for a 401(k) I found their system was designed around being able to "enter your password on your phone" -- using the number keys, i.e. b3G -> 234. I can only hope that version is at least stored as a hash...
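The mapping is just the standard phone keypad; a quick sketch (nothing Fidelity-specific, and I'm only assuming digits pass through unchanged):

    KEYPAD = {"abc": "2", "def": "3", "ghi": "4", "jkl": "5",
              "mno": "6", "pqrs": "7", "tuv": "8", "wxyz": "9"}
    LETTER_TO_DIGIT = {ch: d for letters, d in KEYPAD.items() for ch in letters}

    def to_phone_digits(password):
        # Letters fold onto their keypad digit; anything else is left as typed.
        return "".join(LETTER_TO_DIGIT.get(ch.lower(), ch) for ch in password)

    print(to_phone_digits("b3G"))  # -> "234", the example above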


The worst feeling is getting halfway through keying out your password on the number keys when the Fidelity robot tells you that's not the right password. I just say operator now until they transfer me to a human.


I apparently am "good" enough to never have had that happen, though I don't call very often. That's horrific, since it implies that it is stored as plaintext, "optimistically" as "plain-number-text."


It's also horrific because you can use that to brute-force guess someone's password, because the robot will tell you when you have the wrong digit, so you can work your way through all 10 phone keys for each digit, noting each time whether the robot kicks you out, until you guess the entire thing.


To be clear, the input just times out, likely because they don't expect a long random password input. I don't think it's evidence that they can verify a substring of your password.


Agreed. Their rationale is that keyloggers can't as easily work and their chance of being hacked is lower than the odds of a user with malware keylogging :(


Probably true for the most part, given the nature of the user base. On the other hand, Tesco Bank got owned this week.


Card-readers and other two-factor authentication is a better defence against keyloggers.


A common interview question for penetration testers is how you might design password storage to allow this 'feature' without having to store the plaintext password. It's possible, I guarantee it.


That sounds interesting. Can you explain how to do it?



SSS doesn't look like it pertains to this discussion at all. How do you use this to store single digits securely?


If you have a method for requiring n out of m key parts to use the key, it is fairly easy to see how this can be used to require n out of m characters from a password. The only problem is that just a few characters don't have enough entropy to prevent the whole thing being brute-forced very quickly.


> If you have a method for requiring n out of m key parts to use the key, it is fairly easy to see how this can be used to require n out of m characters from a password.

It doesn't sound very easy to me at all. Can you explain in more detail?


How? I would be very interested in a solution for that, past the obvious and insecure "hash each character separately".


I was in my bank branch recently and they printed off a statement while I was there. It was only when I got home I noticed the url was http rather than https. It was an intranet but even so.


It's possible the machine has some kind of VPN connection and this encryption is indeed end-to-end, just at another layer.

More probably: the branch itself has a hardware VPN, so compromising the local network is still possible.


So if someone breaks into one branch, they break into every branch?

Yay! Free bank accounts all around!


My bank in the UK has two separate things - a password, which I have to enter in the whole, and a "memorable word" which acts as you suggest. I've not seen a bank account which solely uses a memorable word.


Santander uses selected digits of PIN and selected characters of password, with a numeric user ID.

In fairness to them, they do use 2FA for anything involving moving money around.


Ouch, yeah, Santander. Worst bank in the UK. Everyone I know who uses them has had some really terrible experiences.


The last time I was made to change my Chase password, I made it the maximum length at the time (32 characters). However, it turned out their login page had an off-by-one error in its JavaScript, such that it wouldn't let you type 32 characters in the field. I worked around it by using the browser debugging tools to fix the bug, then decided on a 30-character password as an additional margin of safety. They said they'd look into it; I never heard back about it.


You're lucky they didn't send a SWAT team to your door for "hacking."


Finding off-by-one errors in production code is a hobby of mine, but it's also pretty horrifying how often they're security-relevant.


I get the case insensitivity on the username though. You don't want grandma trying to remember if it was SweaterKnits@aol.com or sweaterKnits@aol.com as they resolve to the same thing. If not using the email, the same thing applies, and I don't want the hassle of my tech support trying to resolve the difference over the phone.


In general Chase has a reputation for being much more careful and thoughtful about security than others (and in fact their password policy is somewhat better than some other financial institutions I've done business with). But still, I agree, that sucks.

e: There is a pretty good discussion of this here https://www.reddit.com/r/personalfinance/comments/2m81uj/tip...


This is common with banks due to legacy systems that don't have case sensitivity.

It's also not unheard of for them to strip all non alphanumeric characters so P@ssw0rd2016! is normalized to pssw0rd2016


Frightening that a bank cares less about security than letting people in.

On the other hand, they are probably far more alert to detecting and stopping bruteforcing attempts.

It's a similar situation with certain 4-digit PINs for smartcards; that may seem trivial to bruteforce, but you only get 3-5 tries before the system considers you to be attacking it and permanently locks you out even if you try to enter the correct one afterwards.


I don't think that's correct, the password field is case sensitive for me.


Mine is too. I just tried it and it said incorrect. Maybe it's some set of old legacy users? But that wouldn't make sense since I was migrated in from WaMu.


  Chase's online banking login is case insensitive ...
So is the case with Citi.


This is not true (at least for passwords on Android).


My bank used to do this as well.


You can do what Facebook does without storing the passwords as lower case. If someone tries to log in, and the password doesn't match, then just transform it that way and try again.


If you're saving the hash of the original PassWORD, then transforming the erroneously entered pASSword to lower case will still produce a different hash from the one you have saved. It will only work if you save a hash of the lower-case password.


> If you're saving the hash of the original PassWORD, then transforming the erroneously entered pASSword to lower case will still produce a different hash from the one you have saved.

This is true, but it's not a response to your parent comment,

> You can do what Facebook does without storing the passwords as lower case. If someone tries to log in, and the password doesn't match, then just transform it that way and try again.

1. Store the hash of "PassWORD".

2. Receive erroneous "pASSword", hash it, find the hash isn't right.

3. Reverse the case of the bad input to get "PassWORD", hash that, find it matches.

At no point was it necessary to store the hash of "password".


Oh, you mean only the transformation of inverting the case. But I believe that originally we were talking about any mistakes with case, not only inversion.


You can also do O(2^n) brute-force like a dumbass.

Not sure if this is what GP meant. Maybe Facebook only accepts wrong case on the first position (simple to implement) or maybe GP just doesn't know what they do.


Why wouldn't they transform it before registration and login? That way they don't have to check multiple passwords. I guess I'm missing something.



In reality a lot of people don't see any difference between "Password123", "password123" or "PASSWORD123". I was once talking to a bank and had to give char n of my password and I said something like "Uppercase E" and the rep on the other end of the phone actually scoffed at me "Capitals don't make any difference in passwords" and clearly thought I was an idiot.

I am in two minds whether a service used by non-technical people should allow case-insensitive passwords. It'd be interesting to see what the difference in support load, customer satisfaction and churn would be between case-sensitive and case-insensitive passwords, and also with enforced minimum complexity.


That bank sounds like it has bigger issues: perhaps obviously, it's very likely that they were storing the password as plaintext -- since they were able to ask you for a specific character and verify that your answer was correct. Scary.

I've also never heard of a bank that actually asked for a password, substring or not, aside from a 'Phone PIN' or similar piece of information that's less critical to authentication.


In my experience banks store PINs in plaintext way too often.

The case insensitivity is a first for me, but I've had those people tell me that performing a copy-paste into the password input somehow changed the authentication procedure. He again acted as if I was a complete idiot for suggesting that that made no sense.


The point was more about the reaction of the rep, that it was ludicrous of me to consider passwords to be case sensitive, but yeah, not a good sign in general, though as I use a unique password for everything (1Password) I'm less concerned.

Related, the "3d secure" credit card verification system asks for individual chars of a password (at least in the UK).


If your bank is compromised to the point that your password - plaintext or otherwise - has been discovered, can things get much worse?


It's possible that they create the character entry when you set your password and the bank/operator only has access to that particular character in plain text.


All the other comments seem to be misinterpreting this as "the passwords were changed to lowercase before hashing and storage", which is entirely different.

Lowercasing after hashing increases the likelihood of a collision, but it won't necessarily have anything to do with the upper/lowercasing of the actual password.


It's a good idea. The amount of security gained by distinguishing between upper/lower case is minimal, particularly compared to the support cost. Increase the minimum password length by a character or two to compensate if you're really worried.


TDAmeritrade passwords aren't case sensitive. It's only money I guess.


Password rules...

* Must be 7-15 characters

* Must have at least one letter

* Must have at least one number

* No special characters

¯\_(ツ)_/¯


Saying that they don't require mixed case passwords to be chosen is not the same thing as saying they are "not case sensitive" (e.g. accepting "youare86ed" as valid for matching "YouAre86ed")


If they don't require uppercase characters in the password, it is not so idiotic... especially if they have rules that make up for it.


Charles Schwab surprisingly does this, too.


They were really bad at one point, and only used the first 8 chars. Might be more now iirc


To their credit, they do offer sending you a free security token.


Really? I haven't seen one and I've had six figures in brokerage/checking/savings for a decade. I know someone who worked at their datacenter a while ago and, well, heh.

Also, do they support e-statements for savings accounts yet? I swear it is the only piece of mail I get nowadays.

Overall though they are a great bank with a magic fee-less debit card and human beings who answer the phone 24/7. And they don't seem too evil, but I haven't turned over many rocks.


Yep, if you go here:

http://www.schwab.com/public/schwab/nn/legal_compliance/schw...

and expand the "Be strategic with login credentials and passwords" section, it says:

> Consider getting a free security token, too, which can make every login even more secure. Just call us at 800-435-4000.


Chase bank does this.


[flagged]


The casual racism was superfluous, I think.


Not racism. At least learn the correct terms before slinging accusations.


If the notion that people from third-world countries can't be good developers isn't racism, pray enlighten me on what it is?


Discrimination based on place of living?


I don't think they're recruiting top talent...


Maybe they don't look out for talent on HN unlike Pornhub [0] :)

[0] https://news.ycombinator.com/item?id=12846537


There really is a shortage of quality developers. A bit off topic, but I believe that's why there are so many devs bemoaning the growth of the JS ecosystem...you may have to know actual Computer Science instead of just one tool.


Whatever issues people have with the js tooling ecosystem "Dammit. It requires me to have a computer science background" doesn't strike me as a common one.


I'm not sure I understood your comment. I agree there's a shortage of quality developers, but sometimes that's a result of an influx of newcomers. Maybe you can elaborate on this connection (or lack thereof) between CS and JS? I honestly couldn't tell from your comment if you were speaking in a positive or negative light.


I think an influx of newcomers might be a symptom of a shortage of quality developers rather than a cause. I suspect the cause would be more to do with an increase in demand in the job market and unfilled jobs, which encourages a lower bar when it comes to hiring.


Never miss a moment for a random Js snipe


How do you jump from a small available pool of quality developers to JS problems?!

I really don't like the JS ecosystem but this is totally unwarranted.


Eh, the JS ecosystem's growth certainly isn't something I'd categorize as "scientific."


How does this work?

- The site lists 3.87 million Dutch-speaking accounts.

- Dutch is almost exclusively spoken in the Netherlands.

- The total adult population (15-55) is 4.45 million (http://www.indexmundi.com/netherlands/demographics_profile.h...)

This would mean that 80% of the Dutch adult population has an AdultFriendFinder account!? (Of course people may have multiple accounts, but still, 80% is when taking into account the full (men+women) population.)


The first estimate I see of worldwide Dutch speakers is ~23 million[0]. There's over 5 million Dutch speakers (Flemish) in Belgium alone.

So you're looking at somewhere between 15-20% of Dutch speakers have accounts, which seems more reasonable, particularly if some people have more than one account (very likely, I'm guessing).

0. http://www.ucl.ac.uk/atlas/dutch/who.html


Simple. Most accounts are fake. The thing with AFF is that it pays top dollar in affiliate programs. So everyone and their dog is building fake profiles to lure some naive guys into buying a subscription.


Probably 90% bots and throwaway accounts.


You need to double that, unless you exclude females. Why do you stop at 55?

"15-24 years: 12.11% (male 1,050,889/female 1,010,596) 25-54 years: 39.83% (male 3,400,998/female 3,377,311)"


Thanks for correcting that. Doubling the population halves the estimate to just over 40%. Still high, but more likely (given the already mentioned bots/spammers/double accounts etc). Indeed, why stop at 55? Why shouldn't a pensionado be on a swinger site? Who am I to judge ;-)


When I moved to NL I was surprised to hear swinger/secret affair advertisements on the radio. I think a higher-than-average percentage of the Dutch population uses those sites compared to the US


I don't think it necessarily has something to do with there being more appetite for those sites here, but more with the prevalent hyperliberalism.

Personally, I find them disgusting and don't want them to be broadcast at daytime.


"but more with the prevalent hyperliberalism."

Would you expand on this?


Spammers?

Belgians?


I tried adult friend finder many years ago. It was nothing but Nigerian scammers. I doubt the majority of the profiles are real.


> How did it happen? They were hacked via a Local File Inclusion exploit and you can read more about the situation when it was initially reported from this link.

> LFI vulnerabilities allow an attacker to include files located elsewhere on the server into the output of a given application.

How did they do that? Append /../../../etc to a URL that is supposed to serve a file and hope the server doesn't check for directory traversal?


Local File Inclusion is when you have PHP code like

  include("some/path/" . $_GET['some_url_parameter']);
Adding &some_url_parameter=../../../etc/passwd (or ../../../var/uploads/evil_script.txt) allows you to insert an arbitrary text file from the server into the generated HTML or execute arbitrary PHP code (which in turn can even run arbitrary shell commands if this is enabled on the server).

Since PHP has such a feature, people use it, and to this day you'll occasionally run into a website which employs this pattern. A common use case is

  bad-example.com/article.php?id=article_name.txt
where article.php contains headers, footers, formatting, etc and actual articles are stored in text files.


IMO, if you're checking the URL for directory traversal it's already too late. Whenever I build a server that serves files, I maintain a whitelist set of served files, and the first thing I do in the file request handler is check if the URL is in the set. If not, immediately drop to 404. There's too much that can go wrong with trying to sanitize inputs; it's better to rule out the possibility of unsanitized data by design. There's more than one approach to this, and none of them admit directory traversal.


How would you allow user uploads with that whitelist? Accepted uploads get automatically added to your whitelist? That sounds problematic.


Never never never NEVER NEVER NEVER use user-input as any part of a path!

You should generate UUIDs, bucket by the first few chars, and use a database to map from UUID to human-readable name.


Exactly. The set of acceptable files can be modified at runtime. Now you've localized the issue of sanitizing paths to a small area of your code (file upload) rather than every request. A good way to do this is to save the file on disc with the hex encoding of its SHA256 hash as its name, and then maintain a mapping from file names to hashes. This way, the only feasible attack is to overwrite a preexisting file, which would require the ability to pull off a second-preimage attack on SHA256, which is not generally thought to be feasible.


Problematic how? It doesn't mean the whitelist has to be in the code, it could be 'generated' from a database (let whitelist be the result of select file_name from uploaded_files_table)


Then file_name needs to be reasonable and not arbitrary, right?

I don't care where you push your checks around to, just as long as they exist.


A hex-encoded file hash as a file name is a safe bet. You can resolve file names (unsanitized, stored safely in a database) to hashes and load the files from disk.

This general approach (whitelisting file URLs) lets us localize any path sanitation to the file upload code, rather than every single request.
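A bare-bones sketch of that approach; the upload directory and the dict standing in for a database table are both placeholders:

    import hashlib, os

    UPLOAD_DIR = "/var/app/uploads"          # hypothetical storage root
    name_to_hash = {}                        # in practice a database table, not an in-memory dict

    def save_upload(original_name, data):
        digest = hashlib.sha256(data).hexdigest()
        with open(os.path.join(UPLOAD_DIR, digest), "wb") as f:  # on-disk name is the hash only
            f.write(data)
        name_to_hash[original_name] = digest
        return digest

    def serve(requested_name):
        digest = name_to_hash.get(requested_name)
        if digest is None:
            raise FileNotFoundError(requested_name)  # map to a 404 in the web layer
        # The path is built only from the hex digest, so "../" in requested_name
        # never reaches the filesystem.
        with open(os.path.join(UPLOAD_DIR, digest), "rb") as f:
            return f.read()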


That's one possibility. Another common flaw is upload/download features, where you can get directory traversal (../) in the upload or download file name that you are specifying.

When you've got file read, procfs is very nice :)


That's it exactly. For example, Upload a jpg that is actually code and then call that jpg through the exploit.


lolphp


EVERYTHING done online could be public someday. Act like it.


I feel this is a defeatist stance to take; LFIs are a solved problem and we should be looking at how and why this happened and preventing it in the future.

Another angle: we're supposed to not do anything that requires any form of confidentiality online? Can't book a doctor's appointment, transfer money, send emails to family?


It's not defeatist, it's personal hygiene. Sure there are some convenience trade offs and edge cases, but things like Facebook, Dropbox, etc can almost assuredly be treated as "eventually public" no matter how many buttons and knobs they add. The sooner people realize it, the better.


> Sure there are some convenience trade offs and edge cases

Some convenience trade offs? You're suggesting that people don't use any modern bank, don't use any hospital, don't interact with any state body at all. Should we go live in a cabin in the woods?


It seems to me that what they're actually saying is to tread carefully when it comes to social media and cloud services.


What about people who want to communicate and socialize but want some privacy?


Go for a hike in the woods with them


Signal.


The reason you can transfer money online is because banks and payment providers are insured, so when things go wrong you can (usually) get your money back.

Online banking doesn't implement security features to make things safe; it's to make the insurance cheaper.


Yes, and the insurance is cheaper because the probability of things going wrong is lower. Transitively, they are doing it to make things safe.


It's a realistic stance. What's possible or achievable is not what's commonly done. Even companies that know better or have staff often don't care enough to apply it. Malware with keyloggers and search functions hit computers regularly. The expectation should be, "If it's connected to Internet, treat it like it's public."

It's why quite a few organizations still use air-gapped systems, link/IP encryption between locations, and private, leased lines. A smaller number use more secure or just obscure endpoints that can't execute the programs malware authors write. You don't read about such people in the news getting hit by malware or hackers. They can be hit, esp by high-strength attackers, but it's just rare because they don't trust the Internet, Windows, etc in how they do IT.


I think I mostly agree with what you're saying. I'm under no illusion that things can actually be secured; I don't own a single device that I'm not even slightly suspicious is running malicious code or otherwise leaking information I'm not aware of. Whether by negligence or purely the asynchronous nature of attack.

But is it productive for us to declare everything unsafe and somewhat give up believing we can build and use safe platforms?

I think there's a balance somewhere. I (to give a rather crude example, apologies) would never take a nude photograph of my partner on a digital camera because it's an unacceptable risk. But I'll share fairly personal thoughts knowing that they may come back to embarrass me one day.

I want to put the pressure on companies who make such bold claims about their "military grade encryption" to face bankruptcy and shame when it's proven they're full of lies and negligence; if we just assume everything can be attacked with an LFI, it seems we've stopped caring about trying.

Caveat: I still haven't had breakfast, I don't have my best thinking cap on right now, but that's my gut instinct.


"But is it productive for us to declare everything unsafe and somewhat give up believing we can build and use safe platforms?"

There's the problem: we. We might be able to do it with time and money. Most startups, publicly-traded companies, regular companies, government groups (esp w/ legacy systems), and IOT makers aiming for max cost-cutting won't do it. Most don't know how but won't make the sacrifices even if they learn. Their incentives plus demand-side tell them not to. So, no reason to think they'll do it any time soon past marginal improvements for public relations.

What can happen is people forming organizations ideologically and/or by charter committed to putting quality/security over highest margins in their products or services. Look up Praxis Correct-by-Construction for an example of a company that charges a 50% premium for software they warranty for quality. Secure64 sells DNS with an ultra-hardened OS. GENU builds on OpenBSD. Green Hills has INTEGRITY-178B. OK Labs (now GD) put a microvisor in a billion phones. There are some others, really niche, but still successful where well-marketed.

We could see more of that. Only problem is they fight an uphill battle since they're expected to include a pile of insecure features and protocols in lots of products. And, despite maximum quality, at same price or cheaper than competition! What could go wrong in such an IT market?!


This is such a statement of failure of the IT industry as a whole.

And I think it's right. None of the large OSes are fit for purpose. And it's about time that regulators start protecting unsuspecting consumers from unscrupulous, incompetent, fly-by-night developers like the ones behind this website.


I wrote a quick rundown of it on Schneier's blog discussing liability:

https://www.schneier.com/blog/archives/2011/09/an_interestin...

Idea being we just combine the right features, often standardized in libraries, with the most cost-effective of assurance activities proven to work. I gave a list of the latter to pick and choose from in another discussion:

https://news.ycombinator.com/item?id=10734477

Cleanroom methodology with safer languages with test-case generation, battle-tested libraries for common risk areas (esp web attacks or crypto), automation of parsing/protocol handling, static analysis to eliminate common issues, and basic code review would knock out vast majority of code-injections.


This is a dark thought, but your country may one day slip into becoming a totalitarian state, where the data of private companies is appropriated by the powers that be.


> we're supposed to not do anything that requires any form of confidentiality online? can't book a doctors appointment, transfer money, send emails to family?

You probably won't want to do any of this stuff outdoors in 10 more years, or indoors around a smartphone, television set, refrigerator, or any object with flashing LEDs on it. As a combined network, smart audio coverage can be made to be so complete that incomplete areas will arouse suspicion.

Registering on (or even visiting) an internet site is just slightly different than signing a contract (and in some opinions, legally equivalent.) If you had to sign a form with your address to visit a prostitute, and you couldn't even open the door without it recording your license plate number - what kind of expectation of privacy could you reasonably have?


Read Quinn Norton's "Hello Future Pastebin Readers!"

https://medium.com/message/hello-future-pastebin-readers-39d...

Norton's law: Over time, all data approaches deleted, or public.


No. This is a stance of totalitarian regimes. I doubt the people that joined this site should be shamed for their choices.


That is a weird opposition. It's very true that everything could be public. I don't see OP as shaming any of these people, just pointing out that you have to be safe about what you release to the Internet.


No, get people to make safe and proper websites.

The internet is becoming too important to our lives, we can't just say "presume everything is public"

Your advice means that someone should refuse to visit a doctor or hospital which uses computers, since "I have to act like everything online could be public!". That's just unworkable.


> No, get people to make safe and proper websites.

Not gonna happen. That's like saying companies should make un-breakable security for buildings. No such thing will ever exist.


> Friend Finder Network Inc is a company that operates a wide range of 18+ services and was hacked in October of 2016 for over 400 million accounts representing 20 years of customer data which makes it by far the largest breach we have ever seen

They didn't see the Yahoo breach with 500m accounts?

Also, why is "pakistan" such a popular password? Deployed soldiers?


As a pakistani, that cracked me up. We are at the top of the list of porn searching countries, I think ( http://tribune.com.pk/story/823696/pakistan-tops-list-of-mos... ) and porn sites often have AdultFriendFinder ads, so it is possible that a pretty large number of pakistani people signed up. (Assuming there's a free sign up)


Some bot's default?


Because they hate India


So I have always wondered this, but what is the most common way to realize that your data was hacked? Is it from very careful monitoring of connection logs? Do hackers typically leave notes and/or obvious traces? Do you start to notice your stored information online (possibly for sale) in sketchy places? Do specifically your customers start getting spam?


If you're being proactive about it, one approach is to create "canary" accounts: single-purpose email addresses that sign up for your service and nothing else. When those email addresses start getting spam, it's a strong indicator your database has been accessed.

Many users sign up for each online service with a single-purpose email address, e.g. <servicename>@uniquedomain.com, so many customers will often know of a leak as soon as the service provider does.


I hadn't heard of the canary account approach. Nice idea!

As for single-purpose email addresses, that only works for cases where the service isn't selling account information, correct?


>As for single-purpose email addresses, that only works for cases where the service isn't selling account information, correct?

I don't know how you would tell the difference in that case so I assume yes.

The way I implement this is I bought an entire domain for spam. I created a catchall account and when I sign up for services I can just punch in hackernews@spam.com for example. All of this filters into a single email account allowing me to retrieve all my password resets and account confirmations.

This will weird out some people over the phone:

"Yes, it's comcast@spam.com"

"Sir, to look up your account I need YOUR email address"


I do something very similar. I've had a hotel clerk ask me if I worked for them when I told them my name was <hotelchain>@<mydomain>.org


You can also do this with Gmail aliases[0]: "For example, messages sent to jane.doe+notes@gmail.com are delivered to jane.doe@gmail.com." Although the number of sites with incorrect email validation (that reject perfectly valid email addresses) is shocking. I do this, and you can then just block the alias if it starts receiving spam.

[0] https://support.google.com/mail/answer/12096?hl=en


With fastmail you can also use subdomain addressing [0] for those broken sites and to prevent people from filtering (\+[^@]+) to get unmarked addresses. user@sub.domain.tld is handled the same way as user+sub@domain.tld

[0] https://www.fastmail.com/help/receive/addressing.html


While it may have worked in the past (for a while anyway), what exactly prevents spammers from stripping the suffix, given that the functionality has been public knowledge for many years? Best case they'll be lazy and try both with and without, and you'll end up knowing. Blocking the alias cannot possibly have any effect.


> Blocking the alias cannot possibly have any effect.

Ah, but it does work (for me, twice). Of course spammers could strip the suffix. But since spam is a numbers game, I'm not sure it's worth the effort for them.


At the last company I worked for, we discovered an intrusion when we started getting a ridiculous number of credit card fraud complaints. It should be noted that we sold scientific instrumentation to other small companies and rural markets, so it was pretty easy for them to figure out where their info got stolen from when they only used their cards for infrequent transactions.


AdultFriendFinder.com

103,070,536 passwords already plainly visible

232,137,460 passwords hashed with SHA1

99.3% of all passwords from this website are now plaintext (cracked).

As someone who cares about security, this is very, very painful to read. But it also makes me curious about that password data set. It might be used for security research, like estimating the entropy of passwords more accurately.


I'm shocked that the developers of such a sensitive website would do this. Were the owners cheap and hired some offshore team for pennies?


You can always assume this.


It definitely shows how terrible people are at password generation and reuse but even more so how little it matters on individual sites if those folks have no understanding or don't care about protecting passwords. Yet people keep using 123456 as a password.


Lots of bots and throwaway accounts on AdultFriendFinder. It's normal to have many users using "123456" and the like; they are not real users.


I use silly passwords at sites where I don't care about security and don't want to be correlated with my other accounts elsewhere. Does this mean I'm bad at password generation and reuse? ;)


> if those folks have no understanding or don't care about protecting passwords

I suspect that or is an and in a lot of cases as well.


I often store passwords using PHP's password_hash('password', PASSWORD_DEFAULT) function. This function has been baked into the language since version 5.0 I think. I'm sure most other languages must have a similar function too, yet so many sites save the password in plain text. Doesn't make any sense.


Major props to Anthony https://github.com/ircmaxell for adding this as a language-supported feature to PHP, as well as for his work on techniques for preventing injection.

I work with C#, Java, Python, Go and JS on backends a lot and no other language I worked with had such a simple but secure API.


Not in the std lib in Python, but Django has a nice API as well for setting the password [0].

    from django.contrib.auth.models import User
    u = User.objects.get(username='john')
    u.set_password('new password')
    u.save()

And here is the code which does all the magic - [1]. You can also generate nice passwords [2], use many different available hashers [3], or write your own [4].

[0] - https://docs.djangoproject.com/en/dev/topics/auth/default/#c...

[1] - https://github.com/django/django/blob/stable/1.10.x/django/c... and https://github.com/django/django/blob/stable/1.10.x/django/c...

[2] - https://docs.djangoproject.com/en/dev/topics/auth/customizin...

[3] - https://docs.djangoproject.com/en/dev/topics/auth/passwords/...

[4] - https://docs.djangoproject.com/en/dev/topics/auth/passwords/...


One really nice feature that Django has that is rare and well done is the password upgrading workflow. Not only do they let your app support multiple algorithms at the same time (with one preferred), they also let you chain algorithms during upgrade [0], so if you have a legacy database with all SHA1 passwords, you can upgrade all of them to PBKDF2. At first these will all be PBKDF2(SHA1(pw)), and they will get migrated to just PBKDF2(pw) as users log in, if you set PBKDF2 to your preferred algo.

Note that of course the password algorithms are typed, so this doesn't cause a problem in the corner case that a user's password is a sha1 hash of something else.

[0] - https://docs.djangoproject.com/en/dev/topics/auth/passwords/...
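The moving part is just the ordering in settings; roughly something like the sketch below, where the wrapped hasher is a custom class you write along the lines of the linked docs (the "myapp.hashers" module path here is made up):

    # settings.py -- listed in order of preference; Django transparently re-hashes
    # a user's stored password with the first entry the next time they log in.
    PASSWORD_HASHERS = [
        "django.contrib.auth.hashers.PBKDF2PasswordHasher",   # preferred
        "myapp.hashers.PBKDF2WrappedSHA1PasswordHasher",      # hypothetical custom PBKDF2(SHA1(pw)) wrapper
        "django.contrib.auth.hashers.SHA1PasswordHasher",     # legacy SHA1 hashes remain readable
    ]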


Now how did I not know about `make_random_password`? Solid tip, thanks!


I can't speak for all of those languages, but this functionality is often provided at the web framework level in Python and it fits quite nicely there. Since your web framework typically also knows where you are storing your passwords, you can do nice things like increase the number of bcrypt rounds in a settings file and have users transparently migrated as they login which I'd assume doesn't really work at the language level.

Still, a pragmatic answer and, given PHP started life as a web framework, fitting :).


Go seems pretty nice:

  func GenerateFromPassword(password []byte, cost int) ([]byte, error)
  func CompareHashAndPassword(hashedPassword, password []byte) error
[from golang.org/x/crypto/bcrypt]


v5.5 actually, but there is a userland library to provide the same functionality - https://github.com/ircmaxell/password_compat


Well, on Java it's at least:

    PBEKeySpec spec = new PBEKeySpec(password.toCharArray(), salt.getBytes(StandardCharsets.UTF_8), iterations, digestSize);
    SecretKeyFactory skf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
    byte[] hash = skf.generateSecret(spec).getEncoded();
and then using MessageDigest.isEqual (on newer JVMs; older ones had a bug up to 6u45 or so) to compare the passwords.

Well, the biggest problem is probably generating a truly random salt with SecureRandom, which will slow down your program if used incorrectly.


I would be interested to see if it is possible to work out what percentage of the profiles are fake/bots from the data leaked. Is that possible or would they simply blend in too easily?


It would probably be difficult to prove with certainty, but depending on what the passwords are, you could potentially be able to do something like that. For example, if there are enough accounts that have the same password (which is also relatively unique), then at some point it will be a statistical impossibility that they were all created by different people.
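A crude version of that test is easy to sketch once you have (account, cracked password) pairs; "leaked_rows" and "COMMON_TOP_10K" below are placeholder names, the latter standing for something like a top-10k password list so that only passwords which are rare in general use but shared by a suspicious number of accounts get flagged:

    from collections import Counter

    def suspicious_clusters(rows, common, min_cluster=50):
        # rows: iterable of (account_id, cracked_password) pairs.
        counts = Counter(pw for _, pw in rows)
        return {pw: n for pw, n in counts.items()
                if n >= min_cluster and pw not in common}

    # e.g. suspicious_clusters(leaked_rows, COMMON_TOP_10K)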


My first thought was "again"? This just happened.

Yes. Again:

This event also marks the second time Friend Finder has been breached in two years, the first being around May of 2015.

Data are liability.


Number 17 on the list: "fuckyou", with 34,498 uses. What a strange password choice.

The interesting thing to me is that password choices clearly reflect the demographic of the users.


I think that is a response based upon the dark UI pattern of the site.

If you want to view a profile, they force you to register.

Hence the user clicks on a profile, gets a registration form, and fills it out in a bad mood, since they are being forced to register to continue when they don't want to. Hence "fuckyou" or "fuckoff" becoming their password choice.

It would be interesting to see what email addresses these specific users gave. Possibly throwaways that use equally fruity names?


Cards should be easy to pull.

Things are tightening up.


tl;dr: Last month's AdultFriendFinder.com, Cams.com, Penthouse.com, Stripshow.com and iCams.com databases, presented in a "statistics" advertisement for leakedsource.com's services.


If they don't want it to be mineable, why not a search feature that emails results to the email in question?


I'm guessing they got an exclusive on that one. Want to ramp up the PR machine before delivering the goods. They'll drop it when everyone's excited enough. I doubt they care about privacy, the whole point of their service is/was not caring about it (as opposed to haveibeenpwned).


They say the hashes were peppered. What does that mean? If it's similar to a unique salt per user, I find it hard to believe they could crack that many very strong looking passwords.


All peppering does is make it trivially more difficult to check for duplicate passwords. A system with a decent amount of GPU power can try passwords against SHA-1 at billions of attempts per second.


What does peppering mean though? I don't even know the definition.

Per-user unique salts are definitely helpful in leaks like this. With 400,000,000 users, it would take 400,000,000x more compute power to crack the same number of passwords.


Not really, because the exponential scaling with strength of password dominates the sub-linear scaling in quantity of passwords.

The passwords in the dataset will neatly divide into "trivial" and "intractable".

A single password with 80 bits of entropy (16 characters, random lowercase/numbers) will take more time to crack than 1,000,000,000 strong human-chosen passwords under 40 bits.

Most of the passwords will be so weak that it might not be worth doing the sorting and preprocessing needed for the parallel attack on multiple passwords with the same salt.

Once you're using just plain hashing you've already lost; instead of ad-hoc salting schemes you should be using a proper PBKDF (PBKDF2, bcrypt, whatever).


Going to take it from the top. Skip down for the actual pepper information if you're already familiar with hashing and salting (I assume most people will be).

Let's assume you want to store a password. The first, obvious step, is to store it in plain text. This is obviously brain dead, but well, we live in the world we live in.

    $user_pw = $password;
The second step is to hash it. This means that the password can't just be read out of the database.

    $user_pw = hash($password);
The problem with this approach is that with the amount of computing available, it's fairly trivial to just bruteforce everything, and with the advent of rainbow tables (pre-cracked hashes), it gets even easier.

The next obvious step is to salt the password. Salting means that you add a random piece of information to what you hash, in order to disable the use of rainbow tables. Every password has to be cracked individually. The salt needs to be included in the stored form of the hash, because otherwise you can't calculate incoming authentication requests against it.

    $user_pw = concat($salt, ':', hash(concat($salt, password)));
This makes a targeted attack possible, but mass attack over a long list of passwords gets quite a bit more difficult.

The problem is that now, you have the salt always stored with the password. This means that if your database gets stolen/dumped, an attacker has all the information required to crack specific hashes.

In order to alleviate this, you can use a pepper, which is similar to a salt, except that it is global and unique to your application, and doesn't change all the time. It is a static piece of data that gets hashed as well, but isn't stored alongside the hashes in the database.

    $user_pw = concat($salt, ':', hash(concat($salt, $PEPPER, password)));
This obviously only changes anything if your pepper doesn't get stolen alongside the database, so this is usually an application-specific constant that doesn't get stored in the database.


I've never heard of this before (at least not called "pepper") - I think I've heard similar things called stuff like 'site-wide salt' and 'per-password salt'.

"Peppering" as described is conceptually similar to storing passwords as HMACs under some key not stored in the database.


I'm amused to see "ifyourreadingthisitstoolate" among the long passwords. Quite!


Could just be a reference to the Drake album


Is there a torrent or something of the database that is not hidden behind a paywall?


Why are they not making the data searchable?

I don't see how that helps anyone when a technical person can trivially set up a search, and a non-tech person could pay someone a small sum to do the same.


If we could use a different identifier (like a different email address) for every website, such a hack would not be a problem. Or if we used a hardware key without an email address.


I wonder how much Adobe and Dropbox data from their breaches overlap with AFF data? What a Klondike for a sociologist.


> If Twitter decides to ban them [their new @BigSecurityNews account] as well, we are going to start giving exclusive content to the terrorist group ISIS so they too get banned from Twitter because it seems like that's what it'll take to get Twitter to take action against accounts of those who enjoy cutting the heads off their enemies.

Savage. It's interesting that Twitter seems to be blind to obvious terrorist accounts.


Is such a leak possible for Google searches? I.e., a massive leak linking accounts to search data, or third-party leaks?


Of course it's possible, the question is how likely it is.


can we please not refer to a kid who exploits a 10-year-old known vulnerability as a "researcher".


Your ego is showing.

Just because the vulnerability is old doesn't mean you can disrespect his profession. Sometimes people just write bad code. I'm sure you've done the same.


I've definitely written bad code, for sure. I don't mean to sound disrespectful, I chose my words pretty poorly there.

I'm arguing that this isn't research. There was no novel technique used or investigation into original paths of exploitation or increasing our understanding of previously unknown areas of anything.

There's a material reason I said the above; calling the individual a researcher suggests that the exploitation of AFF was something quite complicated requiring a previously unknown attack, taking away from the fact that having an LFI in your app in 2016 is potentially bad luck but more likely just negligent; it should be highlighted as such.


In the sheepish confirmation it'll be "a sophisticated and meticulously executed attack by what can only be a state-sponsored hacking group"


Hum. Your quote is a line from the last Mirai DDoS.

And the next week, a later article corrected to "a script kiddie from HackForums" as determined by the FBI :D


How do you know he/she is a kid?


How do they crack the hashed passwords?


top 1000 passwords list + brute force + rainbow table + dictionary attack + fingerprinting attack.

All of that running on GPU. It's terribly effective. Even more so when 90% of accounts are throwaway/bots/fake accounts.

I'd make a blog post about cracking 99.7% of AdultFriendFinder passwords in 1 hour. But then I realized that it's evil and I shall not.


And what do they hit to make sure the password was cracked? Do they make bots to log in? I don't understand.


SHA1 is a hashing algorithm (as opposed to an encryption algorithm); this means the string you're trying to hash will always have the same result. As an example, the string "password" will always have the same SHA1 hash (5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8). If you have the list of hashes, you can always find a lot of the passwords by using the techniques explained above.
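You don't need to hit the site at all; you re-hash guesses offline and compare. A tiny sketch, using the "password" hash quoted above as the target:

    import hashlib

    leaked = "5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8"  # SHA-1 of "password"

    def try_wordlist(target_hex, candidates):
        for word in candidates:
            if hashlib.sha1(word.encode("utf-8")).hexdigest() == target_hex:
                return word
        return None

    print(try_wordlist(leaked, ["123456", "letmein", "password"]))  # -> "password"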


Oh ok, I guess this doesn't happen with bcrypt, right? Because it spits out a different hash every time you hash it.


Uh? Of course not. It will spit out exactly the same hash every time, given the same input.

How else would you verify that the password matches ?


Try it, it doesn't. The Bcrypt algorithm generates salts and the salt is built into the outputted hash.


I don't know. Laravel uses a hash algorithm that uses bcrypt and never outputs the same hash.


Bcrypt uses a 22-character salt.

The output of your library's BcryptEncoder.encode(password) includes not only the password hash but also information about the algorithm and the salt. That's what you store in your database. That extra information tells the verification function how to re-compute the hash later on.

See here:

http://stackoverflow.com/questions/6832445/how-can-bcrypt-ha...
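A quick way to see it, sketched with the Python bcrypt package (any bcrypt binding behaves the same way):

    import bcrypt

    pw = b"correct horse battery staple"
    h1 = bcrypt.hashpw(pw, bcrypt.gensalt())
    h2 = bcrypt.hashpw(pw, bcrypt.gensalt())

    # Different every time, because a fresh random salt is generated and encoded
    # into the output along with the cost and algorithm version, e.g. b"$2b$12$...".
    assert h1 != h2

    # Verification re-reads the salt from the stored value, so both still match.
    assert bcrypt.checkpw(pw, h1) and bcrypt.checkpw(pw, h2)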



The passwords shown were cracked with dictionary attacks. Rainbow tables are difficult to use in situations other than cracking up to a particular length using a specific character set, and even for that have little, if any speed advantage over modern GPUs.


Probably SHA-1 Rainbow tables.




