No Windows password shorter than about 10 characters should be considered secure: at that length the NT hash is so easily reversible that it's "light obfuscation" at best. A single GPU can crack all 8-character passwords in minutes. The single best security setting on a Windows network is to raise the minimum password length to something like 14 characters. Use 20+ for privileged or service accounts.
The second best thing to do is to screen password hashes against "top password" lists and reject any that appear in the top N, where N is up to your business policy. I recommend outright rejecting at least the top 10,000 most common passwords.
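The top-N rejection described above can be sketched in a few lines. This is a hedged illustration, not a production policy: the set contents here are stand-ins for a real top-10,000 list you'd load from disk.

```python
# Sketch of top-N banned-password rejection. The "banned" set below is a
# tiny stand-in for a real top-10,000 most-common-passwords list.
def is_banned(password, banned):
    # Case-insensitive exact match; real deployments often also strip
    # trailing digits/symbols before comparing.
    return password.lower() in banned

banned = {"password", "123456", "qwerty", "letmein"}

assert is_banned("Password", banned)
assert not is_banned("correct horse battery staple", banned)
```

In practice you'd run this check at password-set time (and, as discussed further down the thread, at login time too).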
The third thing is to match against specific leaks. E.g.: if you have email@example.com on file and a leak contains that address with a password matching your records, force a password change immediately.
All of the above assumes that MFA is in place, your servers are patched, and there are extensive audit logs on all authentication attempts.
The attacks you need to defend against are:
1) online password guessing, 2) Kerberoast, 3) cracking NTLMv2 authentication handshakes, 4) cracking DCCs (Domain Cached Credentials).
1) is solved by applying a moderate list of banned passwords, a sensible lockout threshold and MFA for things on the internet.
2) and 4) only matter against highly privileged accounts, as they imply access to a low-priv account already. (You need an account to request Kerberos tickets, and if you can read DCCs, you can usually compromise at least a computer account.) Highly privileged accounts should never be accounts you log on with before you can access a password manager, so they should simply use a 16-character randomly generated password. Make this a requirement in your org, and perhaps crack passwords yourself regularly to confirm.
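Generating the kind of random password described above is trivial with a CSPRNG; a minimal sketch (the alphabet choice is an assumption, adjust to whatever your systems accept):

```python
import secrets
import string

# Sketch: generate a random password for a privileged account, intended
# to live only in a password manager (16 chars per the comment above).
def random_password(length=16):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice is cryptographically secure, unlike random.choice.
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = random_password()
assert len(pw) == 16
```

For service accounts you'd bump `length` to 20+ as suggested earlier in the thread.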
So you're left with 3), which is effectively salted and orders of magnitude harder to crack than NT hashes.
I talked about this at Troopers; unfortunately the video is not yet available. http://troopers.de/downloads/troopers22/TR22_BetterPasswords...
Each "console" is a "seat" sort of like a PTY emulating a serial connector. Whether you're hands on keyboard or using a remote desktop connection, your login session has one kerberos ticket which is used for authentication automatically.
Nevertheless, did GP mean Kerberos tickets by "domain credentials"? How does Kerberos prevent the use of password managers? I'm confused.
For legacy reasons, I assume.
> Is it still the default?
> Can group policy be configured to tell everything not to use NTLM, disabling it everywhere?
It can, at least for domain-joined Windows machines. Most environments can't afford to disable NTLM though, because some legacy systems rely on it. However, Microsoft recommends disabling it.
> And can't AD be configured to disable RC4 everywhere?
> Do MS ever plan to properly deprecate NTLM/RC4, disable it in new domains and start displaying prominent warnings when they're enabled?
I'm not aware of such plans. If I had to guess, I'd reckon they want everyone to move to Azure and let on-premises AD die.
I’ve wanted to do something similar but how would you do this without direct access to HIBP’s data?
I don’t want to send customer email addresses to a third party, at least not without a contract.
This makes it possible to use the API without worrying about user privacy, even for repeated checks after indicators of compromise. Suppose firstname.lastname@example.org is the only listed address whose hash starts with 0xABCDE (full hash 0xABCDE0000), and email@example.com happens to hash to 0xABCDE1234. You transmit only ABCDE to the backend; even though those five characters are unique to firstname.lastname@example.org within the list, the server cannot tell whether you're checking that listed entry or your own address. It returns every listed hash with that prefix (here just 0xABCDE0000), and you verify on your end that 0xABCDE1234 ≠ 0xABCDE0000, showing that your address is unlisted.
The response is claimed to contain between 381 and 584 hashes. That means that even if your customer is on the list, from the server's point of view they're just one of roughly 478 candidates (on average) per query, if they're listed at all!
The full explanation is here: https://www.troyhunt.com/understanding-have-i-been-pwneds-us..., and it's worth a read if you're interested. In practice, I wouldn't worry about your customers' data in this case, because you're not really sending anything identifiable to a third party.
There are less privacy conscious APIs to HIBP, but the range search is probably the easiest and most private way to get the checks done.
All of that said, I use random passwords for every service I use and I'd very much like to opt out of your auto reset system. I'm in favour of adding the feature (more websites should, in my opinion!) but I don't like to be forced to change my already unique password if I don't have to.
You don't use email addresses at all with Pwned Passwords. The documentation says "first 5 characters of a SHA-1 password hash", not email-address hash.
HIBP does not have a way to search for which password hashes are associated with a given email address, as this would be far more useful to attackers than to victims. The only data that Pwned Passwords exposes is a list of password hashes and the number of times that hash was used. The expectation is that even if that leaked password was actually for someone else's account, you still shouldn't be using it.
Still, though, you can implement this check every time someone logs in (when the password is transferred over the wire), which should catch most bad-password and password-reuse cases.
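The login-time check against the Pwned Passwords range endpoint can be sketched like this. The endpoint (`https://api.pwnedpasswords.com/range/{prefix}`, returning `SUFFIX:COUNT` lines) is the real documented HIBP API; the fetcher is made injectable here purely so the matching logic can be exercised without network access.

```python
import hashlib
import urllib.request

RANGE_API = "https://api.pwnedpasswords.com/range/"

def fetch_range(prefix):
    # Real HIBP range endpoint: returns "SUFFIX:COUNT" lines for all
    # leaked SHA-1 hashes starting with the given 5-hex-char prefix.
    with urllib.request.urlopen(RANGE_API + prefix) as resp:
        return resp.read().decode()

def pwned_count(password, fetch=fetch_range):
    # Only the first 5 hex chars of the SHA-1 ever leave your machine.
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    for line in fetch(prefix).splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

A non-zero return means the password appeared in breaches that many times, and you'd trigger the forced change discussed above.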
Newer computers also usually come with a TPM, which means you don't have to type the password every time. If the PC doesn't have a fingerprint reader, it can use a shorter PIN.
I know a fingerprint isn't a password, but for protecting low-profile individuals who aren't a target for actual data theft (as opposed to opportunistic old-fashioned property theft), it's likely good enough.
TPMs are not unlocked if they can't validate the boot chain (live cd), so you'd need the disk password (and full user password).
It's still possible to only use a local one, but it's in an unexpected place, so I expect most people to go the online route.
Their reason for this is that you need to save the BitLocker recovery key somewhere, and they don't trust users to do it properly (never mind that the UI for it would be horrendous), so it saves the key to OneDrive.
The workaround is to click "connect to my work account", then "domain join". Not Azure AD, but regular AD. This then presents you with the classic offline account creation dialog. It doesn't even ask you what the domain is.
Regarding Secure Boot, I went through the pain of configuring it under Linux (creating and importing my own keys) before realizing it was of little use without a TPM. It turns out Windows and Linux can't both "own" the TPM at the same time, IIRC (my work laptop has a Windows partition). I ended up learning my randomly generated >15-char disk decryption password by heart.
I'm not sure what you mean by "legacy mode", but I'd expect that to mean "BIOS compatibility mode", and that's not really related (apart from presumably disabling secure boot). I actually prefer UEFI, this allows me to avoid wasting time with a classic bootloader.
> nothing so private in my machine
In YOUR machine. The chances of someone abusing the data on a stolen/lost laptop are small, but not non-existent.
Not to mention insiders that are bad actors.
And as long as employees are warned in advance, they should be aware of the risk of re-using passwords, which already exists today. If anything, this highlights the fact that if employees are using their company password for some other service, they’re placing their employer at risk.
I generally use unique passwords for everything, but I worked many years at a company with a 3-month password rotation policy, and coming up with high-entropy yet memorable passwords was sufficient work that many accounts on machines on my home network used some retired passwords from there.
"Verifiers SHOULD NOT impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets. Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator."
- NIST SP-800-63B Digital Identity Guidelines - Authentication and Lifecycle Management
A large part of me wishes they made this a SHALL NOT. It would have caused chaos with other standards, but it would have been the right thing to do.
In my mind, there's no world where one could make a biometric scanner that can't be spoofed (presumably with an Arduino USB interface). So when all these corporations with the worst security (Facebook, Experian, etc.) leak my data, can't anyone log into my account?
A number of states around the world have my fingerprints too, because I entered those countries as a tourist and had to put at least one finger on a reader.
Maybe some countries, mine included, also have my retina scan; I've had to look into cameras sometimes.
All of that biometric information could be leaked, sold by corrupt civil servants, or exchanged with other countries, so random passwords generated by a password manager protect me more than biometric information does. Am I wrong?
Of course, some site could store and share my cleartext password with whoever they want before hashing it, but I use a different password for each site.
I know of zero biometric implementations where your biometric data is uploaded to the server for verification. All the biometric implementations I've seen (Windows Hello, iCloud passkeys) perform the biometric check on device and send cryptograms to the server, which is as secure as random passwords.
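The cryptogram idea can be sketched as a challenge-response. This is a deliberately simplified symmetric sketch with hypothetical names; real systems (Windows Hello, passkeys) use asymmetric keys held in secure hardware, so the server never holds the signing key either. The point illustrated is that the biometric match happens locally and only gates use of the key; no biometric data crosses the wire.

```python
import hmac
import hashlib
import secrets

# Simplified symmetric sketch of on-device biometric auth. Real systems
# use asymmetric keys in secure hardware; here the local biometric check
# merely gates the device's use of its enrolled key.
device_key = secrets.token_bytes(32)  # provisioned at enrollment

def device_sign(challenge, biometric_ok):
    if not biometric_ok:  # local match failed: key never gets used
        raise PermissionError("biometric mismatch")
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(challenge, cryptogram):
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, cryptogram)

challenge = secrets.token_bytes(16)
assert server_verify(challenge, device_sign(challenge, biometric_ok=True))
```

A copied fingerprint alone is useless to a remote attacker here: without the enrolled device there is nothing to produce the cryptogram with.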
However, even worse than that, your fingerprint in particular is something you leave literally everywhere you go. There was even a demonstration of someone copying Gerhard Schröder's (former German Chancellor) fingerprint from a photo of a bottle he had touched, then creating a mold which fooled a sensor they had access to.
That requires you to get physical access to the device, which puts the attack in an entirely different realm than just "password cracking".
This is one.
1. The attackers create firstname.lastname@example.org / email@example.com and/or use a throwaway phone number (especially if the email provider uses some 2FA linked to a phone).
2. They register an account on a web service using that email, or install an app on that phone (maybe a virtualized one), and upload a picture of me as the avatar, or a fake one.
3. Use my fingerprints on their phone to get through any possible biometric 2FA.
4. They are me.
If they find a way to automate all those steps or make the labor costs small they can register a lot of bots that are real people, because 2FA says so. It's up to their imagination to find a way to profit from that.
We have ID cards, which contain client-authentication certificates. The procedure for acquiring one is the same as for a passport and carries the same legal power: you have to show up in person, they take your fingerprints and photo, and they issue you the card. ID cards will actually be mandatory for everyone beginning 2023-01-01; until now they've been optional but very much favored in my circle. There is a fair amount of stuff you can only do (remotely) with an ID card:
- Set up smart-id for 2FA for banking app in your smartphone. No, I don't have option not to use 2FA.
- Official communication with .gov entities.
- Signature & timestamp service
- Remote notary services (requires video presence and showing ID card additionally to actually using it to put digital signature)
- Logging in various sites (banking, government entities)
- Recovering from lost second factor at national TLD DNS registry.
This is the ultimate authentication mechanism that services use to allow you to perform so much.
To authenticate and to put down a signature, you must use a dedicated PIN code for each of those operations. And of course you must possess the card (and use a card reader).
CA-issued GUIDs unlock the Translucent Database technique, enabling all PII to be encrypted AT REST at the field level.
Translucent Databases 2/e: Confusion, Misdirection, Randomness, Sharing, Authentication And Steganography To Defend Privacy
PS- Just spotted ftrotter's question for the first time. I also worked in healthcare IT and prototyped a PII-protecting schema. Alas, my POC also flew like a lead zeppelin. No password recovery. This strategy requires GUIDs, aka RealID in the USA.
"I am building an application with health information inside. This application will be consumer-facing with is new for me. I would like a method to put privacy concerns completely at ease. As I review methods for securing sensitive data in publicly accessible databases I have frequently come across the notion of database translucency. ..."
I could have written that. Oh well. Someone in much the same situation, having the same questions, and then reaching about the same answer is somewhat validating.
10+ years later, I'm sure there's now dozens of us advocating Translucent Databases techniques.
Are regular smartcard readers compatible? Does the card have NFC for phones? Can you use them under Linux/mac? Do regular browsers work with it? (FIDO/webauthn).
Or is the card reader a standalone device, like my bank uses, where you key in your PIN, and it gives you a one-time code, or a response to a challenge?
No NFC. I don't think smartcards have anything to do with FIDO.
As for phones, there is an application that has to be set up using a computer and smartcard reader, and from then on I can use the app to authenticate & sign.
Biometric information leaks through other means, and if you rely on it for security, you are letting a lot of people in.
Biometrics without the other two doesn't help anyone.
Not quite. IBM has (had?) a research program on "cancelable" biometrics. I do not recall perfectly, but I think they were tweaking the encoded biometric sensor data before committing it to DBs. If there is a leak, one can redo it with a new tweak (like a new salt or nonce).
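The revocable-tweak idea can be illustrated with a toy sketch. Heavy caveat: real biometric readings are fuzzy, so actual cancelable-biometrics schemes use error-tolerant feature transforms rather than an exact keyed hash; this stand-in only shows why a per-enrollment tweak makes a leaked template replaceable.

```python
import hmac
import hashlib
import secrets

# Toy illustration of "cancelable" biometrics. Real schemes must tolerate
# noisy sensor readings; an exact HMAC like this would not, so treat it
# purely as a demonstration of the revocable-tweak concept.
def enroll(template: bytes):
    tweak = secrets.token_bytes(16)  # per-enrollment salt/nonce
    stored = hmac.new(tweak, template, hashlib.sha256).digest()
    return tweak, stored

def matches(template, tweak, stored):
    return hmac.compare_digest(
        hmac.new(tweak, template, hashlib.sha256).digest(), stored)

tweak, stored = enroll(b"encoded-sensor-data")
assert matches(b"encoded-sensor-data", tweak, stored)
# After a leak: re-enroll with a fresh tweak and the old record is useless.
```

The raw biometric itself never goes into the DB, and rotating the tweak "cancels" a leaked record the way rotating a salt-plus-password would.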
In the Netherlands, the police can hold your finger to the fingerprint reader on a device they confiscated (it might need a court order, or might depend on circumstances, such as an imminent threat to life), but in general they cannot order you to work on your own prosecution. Why you can then be ordered to put your finger on the pad, I have no idea, but it has been ruled that you cannot be ordered to tell them a password.
Then again, the secret services have been allowed to order giving up passwords since forever.
left index followed by
left ring followed by
left middle => Wipe
That's why you can't just change phones, and then login with your fingerprint without setting everything up again.
It’s also more secure than a password on a phone because if you’re using it in public someone can watch you type your password in.
Of course, someone might be able to clone your head shape.
This is from 2005:
> Police in Malaysia are hunting for members of a violent gang who chopped off a car owner's finger to get round the vehicle's hi-tech security system.
I guess this is a question of threat model. I hope nobody would want to chop off my head just to unlock my iPhone. But this always reminds me of the scene in "Demolition Man" where Wesley Snipes spoons out someone's eyeball to open the biometrically locked door of his prison.
> It’s also more secure than a password on a phone because if you’re using it in public someone can watch you type your password in.
I'd rather hold a hand over a PIN pad than having to wear a mask to prevent my face from being scanned in public.
Even if you could write your sensor's face data into someone else's phone, you still wouldn't be able to authenticate with it, because it doesn't have the same sensor. It's not just different keys, the fixed layout of the IR pattern is different.
> I'd rather hold a hand over a PIN pad than having to wear a mask to prevent my face from being scanned in public.
And not sure what the actual threat model here is, but I don't think strangers can scan your face in a way that's useful to Face ID. (Wearing a mask doesn't stop general identification technology, it doesn't even break Face ID anymore.)
> Even if you could write your sensor's face data into someone else's phone,
Since the keys presumably aren't retrievable from the hardware, it doesn't matter if there are random or intentional production flaws in the sensor itself: you need the original hardware anyway. You just need to trick it into doing the authentication. That's the part where biometrics are involved, the part where you present it with a username so to say. The rest is private key authentication.
> In December 2009, RockYou experienced a data breach resulting in the exposure of over 32 million user accounts. This resulted from storing user data in an unencrypted database (including user passwords in plain text instead of using a cryptographic hash) and not patching a ten-year-old SQL vulnerability. RockYou failed to provide a notification of the breach to users and miscommunicated the extent of the breach.
Famous and shipped by default, agreed, but actually used? It's really low quality; I've mostly seen it used for CTFs: because it is so common, the organizers/challenge makers think picking a password from this list is fair game for a challenge where the trick is to crack a user's password hash without requiring proper cracking hardware. In the real world it can be a starting point, but it's not really used much anymore.
Things like the linkedin list and newer lists are more accurate, especially when combined with rule sets that add additional transformations (add an(other) exclamation mark to a password, change o to zero, combinations of these things, etc.)
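The rule-set transformations mentioned above can be sketched in a few lines. Real tools (hashcat, John the Ripper) have far richer rule languages; these are toy examples of the specific mutations named in the comment.

```python
# Toy sketch of cracking-style rule transformations: append an
# exclamation mark, change o to zero, and combinations of these.
def variants(word):
    out = {word}
    out.add(word + "!")                    # append exclamation mark
    out.add(word.replace("o", "0"))        # o -> zero
    out.add(word.replace("o", "0") + "!")  # both rules combined
    out.add(word.capitalize())             # common capitalization tweak
    return out

assert "passw0rd!" in variants("password")
```

Each base word from a leaked list fans out into many candidates, which is why rule sets multiply the effective coverage of a wordlist.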
The problem "in the real world" is that people will lose these keys all the time. I mean, I agree, passwords need to die, and hopefully some of the work that is being done by Apple and others will help bring on an end to passwords, but you can't really talk about replacing passwords with FIDO keys without talking about how to deal with account lockouts, which is a real, hard problem.
Similarly, biometrics may be good for a user ID but they make horrible passwords. These days fingerprints and irises can be copied from photographs.
Which isn’t to say that you shouldn’t go with the new solution anyway. But I’m always skeptical when all people say is “it solves all the existing problems.”
Authenticator app, HID card, or FIDO key. Biometric is coming but the goal is to not have to give people yet another reader/device.
In theory we wouldn't have to worry about someone losing their card or key, but they don't always set up all three in their account.
He said he didn't know how it would be done securely.
I see a lot of attacks are due to account takeover and we currently seem torn between allowing an attacker to circumvent the 2FA by account reset or leaving someone unable to access their account for ever.
I started scanning 2FA codes into two phones: my main one and one that I leave hidden at home (switched off) as a backup. Knowing my luck though, I'll need to access the one account I forgot to scan into the second phone!
As for the biometrics, when people talk about biometrics for authentication, they are usually talking about using the biometrics to unlock something stored securely on a device. Without the device that has the actual credential being used, the biometric that has been copied doesn't do attackers much good.
Edit: oh crap
Hey, waitaminute! How did you know that my password is "BingoBingo77".
It shows as •••••••••••• to everyone but me?
For anyone out of the loop: http://www.bash.org/?244321
However, Windows Hello in Windows 10 does not support local logins with security keys. This may have changed last week with a recent update, but it definitely wasn't supported when I installed Windows last Christmas.
I think it's Microsoft's opinion that security keys are too secure for consumer use; if a consumer is locked out of their personal device due to mismanagement, theft or loss of a hardware key, that's a support headache and liability burden that they're unwilling to take on at this point.
>In fact, pretty much the only case where complexity and length matter is when we’re defending against offline password cracking. But for every other case in the threat model where passwords are stolen, length and complexity simply don’t matter.
The idea is that most passwords are stolen when they are plaintext. So it only matters that the password is unique to that system. Offline password cracking is relevant for cases like the passphrase used to protect your PGP or SSH keys. Then length and complexity is important. Stuff like the suggested FIDO is the same sort of thing. If you need to protect the FIDO key information then length and complexity of your passphrase is important where offline password cracking is relevant.
Ah the good ol' bus factor.
The linked Diceware website run by the daughter has press links about the $2 passwords she sells.
The FAQ notes the passwords are $4 a pop.
The actual price: $8
Paid premium extends this to 21 characters.
> We manually checked and attempted to remove as many profane, insulting, sensitive, or emotionally-charged words as possible, and also filtered based on several public lists of vulgar English words
I kind of wish they had a list _without_ this step though. Vulgar and emotionally charged words are easy to work into stories and easy to remember.
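A Diceware-style generator is simple enough to sketch; the word list below is a tiny stand-in (a real list, such as EFF's, has 7,776 words, and passphrase strength depends entirely on list size and word count, not on the words chosen).

```python
import secrets

# Toy Diceware-style passphrase generator. WORDS is a stand-in sample;
# a real list (e.g. EFF's long wordlist) has 7,776 entries.
WORDS = ["correct", "horse", "battery", "staple",
         "orbit", "velvet", "mango", "quartz"]

def passphrase(n=5, words=WORDS, sep=" "):
    # secrets.choice gives uniform, cryptographically secure selection.
    return sep.join(secrets.choice(words) for _ in range(n))

p = passphrase()
assert len(p.split(" ")) == 5
```

With the full 7,776-word list, each word contributes about 12.9 bits of entropy, so five words give roughly 64 bits regardless of which words land in the phrase.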
Sounds exactly like the advice an offline cracker would give. ;)
For example, the primary threat model for my mobile device is a combination of shoulder-surfing and theft, because I ride a lot of public transit. So it's way more secure for me to touch the fingerprint sensor rather than constantly peck in my password while I'm being observed. A common criminal or homeless dude who steals/finds my phone won't know my password because I'm not revealing it, and they're unlikely to have access to my finger or its print.
If my threat model were different, say law enforcement/TSA confiscation or something, I might be more worried about walking around with fingerprint auth enabled. So if I head to the airport or enter some other high-risk area, I might consider disabling that, removing the sdcard and/or SIM card temporarily.
Biometrics as a way for my personal device to recognize my physical presence is mature tech, and useful for consumers in ways that passwords aren't.
This is less secure against dedicated attackers with physical access, but much more secure against remote attackers: there's usually no way to provide the biometrics to the HSM in software, and the authentication key can't be extracted from the biometric device, so an attacker must keep persistent access to it to authenticate each time.
Things like FIDO Yubikey are basically a password unlocked by biometric information so someone needs the key AND your biometric information to unlock it. Even if someone knew your "biometric" information, they would still need the key.
Interesting way to incentivize their daughter to do something.