I recall they also spoke on some security aspects of the system's design, like how the cracked passwords never touched disk and had to be destroyed as soon as possible, etc.
I wish I could find a recording or a writeup on this somewhere, as I thought it was a pretty cool (and effective) approach.
Anyway, the format of our passwords was quite strict. I don't remember the exact rules, but it required "special" characters and lower/upper case letters and numbers and a minimum length etc. So what I did was write a system to scan all outgoing email. It would search that email for all strings which matched our password pattern. It would then attempt to authenticate against the Active Directory with each of those strings. If any succeeded, it would block the email, and the person sending the email would get a response telling them to not send their password via email. I would also be Cc'd in, so I could keep an eye on it.
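The core of that filter can be sketched in a few lines. This is a minimal illustration, not the linked source code: the password policy regex is an assumption (≥8 chars with upper, lower, digit, and special), and `try_ad_bind` is a made-up stand-in for the real Active Directory bind call.

```python
import re

# Assumed policy: at least 8 chars containing upper, lower, digit, and special.
POLICY = re.compile(r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z0-9]).{8,}$")

def candidate_passwords(body: str):
    """Return every whitespace-delimited token that matches the policy."""
    return [tok for tok in body.split() if POLICY.match(tok)]

def try_ad_bind(user: str, password: str) -> bool:
    """Hypothetical stand-in for an LDAP bind against Active Directory."""
    raise NotImplementedError

def should_block(user: str, body: str, auth=try_ad_bind) -> bool:
    # Block the email if any candidate string authenticates as the sender.
    return any(auth(user, tok) for tok in candidate_passwords(body))
```

A real deployment would obviously need to throttle the bind attempts so the scanner itself doesn't lock accounts out.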
This stopped several people a week from emailing out their login details to a phisher. I wrote about it here: https://www.grepular.com/Mitigating_Spear_Phishing - With links to the source code. I also later came up with a simpler solution and wrote about it here: https://www.grepular.com/Defending_Against_Spear_Phishing_wi...
I was reading about something similar for CI servers: compare stdout/stderr against the values of all initialized secrets and, if any strings match, filter them out. It's a simple way to block "echo $SECRET_STUFF" from being published to a public log. It doesn't catch everything, as it'd still be possible to curl out to transmit information, but it works quite well for the more common case of "Let me dump all environment vars to debug why this isn't working...".
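That filtering step is simple enough to sketch. This is an illustrative version only (the secret names and values are made up, and real CI systems also have to handle secrets split across lines or encoded):

```python
def redact(line: str, secrets: dict) -> str:
    """Replace any occurrence of a known secret value with a placeholder."""
    for name, value in secrets.items():
        if value:  # skip empty secrets so we don't mangle every line
            line = line.replace(value, f"[{name} REDACTED]")
    return line

secrets = {"SECRET_STUFF": "s3cr3t-value"}
print(redact("token is s3cr3t-value ok", secrets))  # token is [SECRET_STUFF REDACTED] ok
```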
Did you consider regularly sending phishing emails yourself, and automatically calling out anyone who replies anything at all to them? I mean, you won't catch quite as many people, but eventually most will learn, one would think.
With a large batch of new incoming students every year this problem would never go away.
In the mid 1990s, I was the computer security officer for the 81st Medical Group in the USAF, which is the proper name for a rather large DOD hospital in southern Mississippi.
Though it was 22 years ago, the hospital was almost completely paperless. Every member of the staff, from doctors to orderlies, used one of the 10,000 or so VT320 terminals spread across a dozen or so buildings on the campus and beyond. Needless to say, on an average day a person would enter their userid and password many times. Many of those accounts were very powerful, because we were networked with the rest of the DOD's medical records. One example report I ran with a doctor's account credentials was 'List everyone in the DOD, past or present, who is or was HIV positive.' ('Was' because the person could be dead.)
Furthermore, this entire system was reachable via the Internet, via AFIN (Air Force Information Network).
This probably strikes anyone reading this as...kind of nuts. And by today's standards it certainly is. But 22 years ago, most people weren't thinking in those terms.
The implications did freak me out a bit, once I took the job, and though I didn't have the power to do much about it structurally, I could do some things to improve password security.
So I had a dedicated (dating myself here) Pentium Pro Linux server that did nothing but run password attacks on our entire authentication database. On top of that was some automation I wrote that, once an account's password was guessed, would send automated e-mails, daily, to the account holder and their manager.
If the password wasn't fixed in a week, then their account would be automatically expired, forcing them to pick a new password.
The system didn't stop them from picking the same password as before, which people frequently did, but the automation was smart enough to expire their password again the next day, without the grace period, when that happened. That was annoying enough to get people to stop the practice.
This was rather...unpopular...among the staff. But I had that little 'HIV Positive Report' presentation I mentioned before. I said the account I ran that report from was behind the password '1234', and that anyone in the world could have logged in, run the report, and published the results. The thought of that spooked even the most technically and security clueless medical types.
Scare tactics? Yup. But sometimes scare tactics are justified.
Have you considered writing this up as a full post somewhere?
Where do you think I should post it?
Based on the background you described, I'm sure that's not the only story you have.
It'd also make a great lightning talk for a conference.
Uh, um, wow. That's a pretty serious abuse of privilege you're admitting to, even by 1990s standards. I'd lawyer up if I were you. Shit-storm incoming...
Changing cracked passwords was compulsory, you got an email that just said (paraphrasing) "Your password is crap" and then were forced to change it at next login.
It was quite amusing, and instructive for the first year students. I seem to remember it happening to maybe 15 to 20% of people when I started, even though everyone was warned repeatedly. And this was with hardware and hashes from 15 years ago. Many (most?!) websites were still storing passwords in plain-text and didn't use TLS for log-in forms in those days!
On the other hand, if you are not part of the security team, something like this can get you into some real trouble. Don't try this at home, kids!
And in fact, four random words is actually quite strong. The XKCD comic that password is taken from accounts for the use of word lists in its entropy calculation. In fact, it even _assumes_ the attacker knows the exact 2048-word dictionary you're selecting the words from. Even under those assumptions, four random words is _still_ a pretty strong password.
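A quick back-of-envelope check of that claim, assuming the XKCD setup (2048-word dictionary known to the attacker) and an assumed rate of 10,000 guesses/sec against a well-stretched hash:

```python
import math

WORDS = 2048  # dictionary size, assumed known to the attacker
entropy_bits = 4 * math.log2(WORDS)  # four independent random words -> 44 bits

# Average crack time is half the keyspace at the assumed guess rate.
avg_years = (WORDS ** 4 / 2) / 10_000 / (365 * 24 * 3600)
print(entropy_bits, round(avg_years))  # 44.0 bits, roughly 28 years on average
```

Even against an attacker who knows the scheme exactly, 44 bits behind a slow hash holds up well; the guess rate is the assumption doing the work here.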
Any common password pattern you could catch via brute force could also be detected via zxcvbn, except that zxcvbn would be much faster and more efficient at it.
Might have been improved by now; not sure. If it's not, you might be wasting electricity another way ;-).
Not to mention producing more spam for the rest of us.
For that reason I've been trying out password-less login for a while now (it works via email), and so far non-tech folks haven't complained either.
It is pretty much as though you always used the "forgot password" mechanism to login.
Wrote about it here - http://sriku.org/blog/2017/04/29/forget-password/
Plus by merging all of the log-in paths (registration, 'forgot password', and normal login), you have one thing to design and secure rather than three. That seems like a huge advantage from a security perspective.
"In this paper, we study the existence of multicollisions in iterated hash functions. We show that finding multicollisions, i.e. r-tuples of messages that all hash to the same value, is not much harder than finding ordinary collisions, i.e. pairs of messages, even for extremely large values of r. More precisely, the ratio of the complexities of the attacks is approximately equal to the logarithm of r. Then, using large multicollisions as a tool, we solve a long standing open problem and prove that concatenating the results of several iterated hash functions in order to build a larger one does not yield a secure construction. We also discuss the potential impact of our attack on several published schemes. Quite surprisingly, for subtle reasons, the schemes we study happen to be immune to our attack."
If you want to force the attack to go through two hashes, you'd use functions sequentially (e.g. F(G(input)), not F(input)+G(input)), which the paper at least initially doesn't talk about. I didn't read the whole thing, mind you...
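The distinction is easy to see in code. This is purely illustrative, using SHA-256 and SHA-512 as stand-ins for F and G:

```python
import hashlib

data = b"input"

# Sequential composition F(G(x)): an attack has to go through both functions.
sequential = hashlib.sha256(hashlib.sha512(data).digest()).hexdigest()

# Concatenation F(x) || G(x): the construction the paper attacks. Joux shows
# it is not much stronger than the better of the two hashes on its own.
concatenated = hashlib.sha256(data).hexdigest() + hashlib.sha512(data).hexdigest()
```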
Note that password-stretchers like bcrypt/PBKDF2 use only fairly small extensions of this idea, so clearly the construction isn't known to be flawed in general.
The paper is a bit above my level of understanding and I tried making sense of how the cryptanalysis is done to no avail.
There's a risk, depending on your use case and traffic levels, that if you crank work factors too high you can impact users' perception of your performance (e.g. a login operation might appear slow).
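The work factor is just the iteration count, so you can measure the latency trade-off directly. A minimal sketch using the stdlib's PBKDF2 (the iteration counts here are arbitrary examples, not recommendations):

```python
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)

for iterations in (10_000, 100_000, 1_000_000):
    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations:>9} iterations: {elapsed:.3f}s")
```

The usual advice is to pick the largest count your login-latency budget tolerates, since the same factor slows the attacker's offline guessing.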
Once the attacker figures out what the two hashing algorithms are, the scenario basically becomes the same as cracking hashes of one algorithm of increased difficulty (through the number of passes).
So, like the other answer implied, the increased complexity of maintaining two algorithms might not be worth the obscurity trade-off in the end.
However, I am not a security professional either, so perhaps my opinion is not comprehensive enough.
I raised an eyebrow at the hash/salt table alone.
10 possibilities in position 1
10 possibilities in position 2
...
10 possibilities in position 8
10 * 10 * 10 * 10 * 10 * 10 * 10 * 10 = 10^8
Alternatively, one can simply observe that 99999999 is the highest possible number, and since 00000000 is also possible, there are 99999999 + 1 = 100000000 = 10^8 different possibilities.
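The same count, plus what it means in practice (the guess rate below is an assumed fast-hash GPU figure, just for scale):

```python
digits_per_position = 10
positions = 8
keyspace = digits_per_position ** positions
print(keyspace)  # 100000000, i.e. 10^8

# At an assumed 1e9 fast-hash guesses/sec, exhausting the space is instant.
assumed_rate = 1_000_000_000
print(keyspace / assumed_rate)  # 0.1 seconds
```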
Delegate to someone else™ isn't always the answer to your security problems. It only adds more complexity, with no more and no less security.
1) Decrease complexity, and
2) Add more security.
This is supposing I am not a security expert and that Amazon has a good implementation.
Of course, we all have libraries to use etc.
It's still a pretty good option.
But those are bad comparisons. A key and lock is an asynchronous single use authentication+authorization mechanism. Passwords are just the authentication part, so trying to replace these just requires we have a secure way to authenticate ourselves.
We have the benefit that we are using digital systems, so our authentication can be digital, too. We can also rely on multiple factors to improve how authentic this process is. Biometrics, digital files, access to other accounts and networks, offline code generators, and personal information all provide lots of authentication data and multiply the effort needed to defeat the system. By combining all these factors, we can create a new digital key that is far more difficult to defeat than old methods by themselves, and ultimately is more flexible because it can be made up of any of these things.
The problem mainly seems to be that we live in a world of different locks, and most locks don't accept this particular kind of digital key. We've hacked around this problem and made some attempts at more compatible solutions, but they really fall short of their true potential.
In the future, you should simply be able to use any system and know that it will authenticate you in a way that can't be copied or cracked. Today that just isn't the case. So for now, maybe we should move the goal posts. We can keep making our keys more unwieldy, but we can also get more guard dogs.
The guard dogs need to exist not only to protect the locks, but the keys, too. If you go to unlock a door, a thief can knock you out and steal your key. Each aspect of our digital access needs guard dogs. We can no longer accept insecure communication methods, nor insecure computing platforms, to exchange our authentication. I think the real challenge going forward is rethinking how we process data altogether.
But U2F is really only a stopgap technology designed to provide a better mechanism than SMS or TOTP. There are still difficulties users will find with this mechanism that are problematic to secure or make less cumbersome, slowing adoption and security in general. And U2F still has several attacks that will work against it, making it somewhat trivial for malware to take over an account.
I envision a future where not only are there many factors we can use to authenticate, but that we might never need to "reset" our accounts again. That the majority of attacks on the user could end, and that servers will be more resilient to both general attacks and specifically data exfiltration. And that the data we use to secure accounts on the server can't be reused. An almost secure technological world.
This requires implementing strong security measures in all of the computers we use today. It also requires the adoption of universal multi-factor authentication methods, and a methodology to protect them from abuse by attackers. You can't get there by tacking more complicated mechanisms onto computers that are already not secure.
The FCC, or the corresponding body elsewhere, should mandate that phone networks and phones support a secure messaging protocol which could guarantee that a message sent to a phone number can only be received by that device.
Password-only authentication is like locks on luggage, even with best practices.
We've already got TOTP (RFC6238) and U2F. At this point it's just a matter of adoption.
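TOTP in particular is small enough to sketch entirely from the stdlib. This follows the RFC 4226/6238 construction (HMAC-SHA-1 with dynamic truncation over a 30-second time counter); it's a sketch to show how little is involved, not a hardened implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA-1 with dynamic truncation."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP applied to floor(unix_time / step)."""
    t = int(time.time()) if for_time is None else int(for_time)
    return hotp(key, t // step, digits)

# RFC 6238 test vector: T=59s, ASCII key "12345678901234567890" -> 94287082
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

A real verifier would also check a window of adjacent time steps and compare codes in constant time.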
From the article, a user choosing a random (i.e. not in wordlists) 8-character password with upper/lower/numeric characters could expect an attacker to take 3 years to crack the password (and that's attacking one hash!).
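That ballpark is easy to reproduce. The guess rate below is my assumption (on the order of a million PBKDF2 guesses/sec across a cracking rig), not a figure from the article:

```python
alphabet = 26 + 26 + 10   # upper + lower + digits
keyspace = alphabet ** 8  # 62^8, about 2.2e14

assumed_rate = 1_000_000  # PBKDF2 guesses/sec, an assumed rig-wide rate
avg_years = (keyspace / 2) / assumed_rate / (365 * 24 * 3600)
print(round(avg_years, 1))  # roughly 3.5 years on average
```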
Now to be clear, I totally think that passwords are a bad idea (mainly because humans aren't well equipped to choose and manage large numbers of random strings), but I don't really see how this article advances that argument.
A very motivated attacker, or one with a sophisticated set of wordlists and masks, could eventually recover 39 × 16 = 624 passwords, or about five percent of the total users. That's reasonable, but higher than I would like.
They've then eventually got access to 5% of the users' passwords, and the ones they got access to were all based on dictionary words...
Assuming that the site has any level of reactive/detective controls, they've noticed the breach and invalidated the passwords, thus rendering them useless.
What I'm saying is that the offline password-cracking times demonstrated in this article don't seem to indicate any more weakness in the use of passwords than was already known. The percentage of attackers who will bother hitting a PBKDF2'd password database from a forum site with any dedicated cracking beyond a run with some dictionaries just isn't that high.
Attackers are already drowning in existing compromised password databases, many/most of which exposed cleartext or weakly hashed (unsalted MD5/SHA-1) passwords, so realistically the incentive to get another set, where you really have to work hard for them, isn't likely to be that high unless it's a high-value site.
If you want examples, just look at the lists on https://haveibeenpwned.com/ - 500 million accounts with cleartext passwords from a single dump sits at the top of the list.
That said, I admire Discourse's efforts to move the bar higher by increasing complexity requirements and blocking weak passwords; it all helps people move away from less secure alternatives.
Easy targets will always be compromised first.
Where I think many/most applications would benefit from more security is in detecting/reacting to attacks.
Most apps have no controls along these lines at all, and make an attacker's life very easy in that they can keep trying vast numbers of attacks without being blocked by the application.
There's been some decent foundational work done on this by things like OWASP AppSensor (https://www.owasp.org/index.php/OWASP_AppSensor_Project) but I've not seen many applications actually implement the guidance...
I.e., we often describe breaches as "really bad", but it would be good to quantify them in terms of things like:
- Revenue Lost (Company)
- Reputation Lost (Company)
- Time Lost (Company + User)
- Increased Costs and Penalties (Company)
- Assets lost (Company + User)
That's just diagonals on the QWERTY keyboard.
Apparently that's an in-game password for https://en.wikipedia.org/wiki/Parasite_Eve_II
Commonly used passwords can be pre-hashed and easily cracked.
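That's the whole point of a precomputed lookup table: hash the common-password list once, then reverse any unsalted leak in constant time. A tiny illustrative sketch (the four-entry list stands in for "the 10,000 most common passwords"):

```python
import hashlib

# Tiny stand-in for the common-passwords blacklist.
common = ["123456", "password", "qwerty", "letmein"]

# Precompute once: unsalted hash -> plaintext.
table = {hashlib.sha256(p.encode()).hexdigest(): p for p in common}

def crack(leaked_hash):
    """O(1) reversal of an unsalted hash of a common password."""
    return table.get(leaked_hash)

print(crack(hashlib.sha256(b"letmein").hexdigest()))  # letmein
```

A per-user salt forces this table to be rebuilt for every account, which is exactly why salting (on top of stretching) matters.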
> Users cannot use any password matching a blacklist of the 10,000 most commonly used passwords.
If you do have a hardware security module, then why not do away with hashing altogether? Encrypt the passwords with AES-128 and you'll likely be fine (as long as the attacker can't extract the key from the HSM).
It's not unheard of for something like a decommissioned database backup to wind up insecure and on the internet without being properly wiped, causing a whole-db leak without anyone actually breaking into a production system.
Not sure it's worth the effort:reward though