As a rough estimate: a $2k bitcoin miner can do 2^45 SHA-256 hashes/sec whereas your $2k laptop can do 2^16 hashes/sec; the attacker has roughly a billion-x advantage over you, and it scales with their funding. At that point, doing even 10,000 PBKDF2 iterations may not make much of a difference.
argon2, scrypt, and other memory-hard password hashing algorithms reduce the attacker's orders-of-magnitude advantage by requiring RAM. Attackers might be able to purchase RAM cheaper than the defender, but nothing close to a billion times cheaper.
Addressing concern #3 (wanting a password set on a laptop to unlock in a reasonable amount of time on a low-end smartphone), you could restrict the RAM to some small amount (like 256MB) if you anticipate needing to use a low-end device. This will still be a vast reduction in the attacker's advantage over PBKDF2.
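To make that memory cap concrete, here's a sketch in Python using the stdlib's scrypt (also memory-hard; argon2 would need a third-party package). The parameters and the `slow_hash` helper are illustrative, not a recommendation:

```python
import hashlib
import os

def slow_hash(password: bytes, salt: bytes, mem_mib: int = 32) -> bytes:
    """Memory-hard KDF capped at roughly mem_mib MiB of RAM."""
    # scrypt uses about 128 * r * n bytes; with r=8, n = mem_mib * 1024.
    # mem_mib must be a power of two, since scrypt requires n to be one.
    # For the 256 MiB cap discussed above, pass mem_mib=256 (n = 2**18).
    n = mem_mib * 1024
    return hashlib.scrypt(password, salt=salt, n=n, r=8, p=1,
                          maxmem=2 * mem_mib * 1024 * 1024, dklen=32)

salt = os.urandom(16)
key = slow_hash(b"correct horse battery staple", salt)
assert key == slow_hash(b"correct horse battery staple", salt)  # deterministic
assert key != slow_hash(b"wrong guess", salt)
```

The default here is scaled down to 32 MiB so the demo runs quickly; the point is that the memory cost, not the iteration count, is what the low-end device budget constrains.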
Someone using an FPGA or ASIC can dedicate almost all of the die to creating lots of SHA-256 units.
You're off by about 10 binary orders of magnitude (the joy of binary exponents).
From what I can see, a more realistic estimate for a single core on a desktop is in the 2^26 range. Keep in mind that the PBKDF2 defender is single-threaded by design, whereas the attacker is not.
This still represents ~a half-million-x advantage for the attacker per $2k they spend. For $1m, they can guess passwords ~250 million times faster than you.
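You can sanity-check the single-core ballpark on your own machine; note pure-Python overhead means the real C-level rate is a few times higher, so treat the output as a lower bound:

```python
import hashlib
import math
import time

def sha256_rate(seconds: float = 0.5) -> float:
    """Roughly how many single-block SHA-256 hashes/sec one core manages."""
    buf = b"\x00" * 64  # one 64-byte input, like a short password guess
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        for _ in range(10_000):  # batch to amortize timer overhead
            hashlib.sha256(buf).digest()
        count += 10_000
    return count / (time.perf_counter() - start)

rate = sha256_rate()
print(f"~2^{math.log2(rate):.1f} hashes/sec on this core")
```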
If we assume memory is the limiting factor for argon2, then even if a specialized attacker can use it at scale for 1/20th the cost, a 20x advantage is much better for the defender than a 500,000x advantage.
> What is crucial here is that we don't want to have the defender stuck with a slow implementation of something that the attacker will have a highly optimized implementation for.
Even if every BW vault leaked, if it takes half a day to run through all 8-character [a-zA-Z0-9] passwords, it's not practical to do that for every vault. On the other hand, if I'm being targeted, even increasing that to a month wouldn't really matter.
Every "critical" site I use also supports u2f 2fa, which I've turned on. So even if they got my passwords, there's the 2nd factor they don't have.
tl;dr: Just use a damn password manager; even one with arguable issues like this improves the average person's security by orders of magnitude.
What US bank do you use that supports U2F, or do you not include banking in "critical"?
It might not be quite as good, but email 2FA behind U2F-protected email gets you pretty close.
The same physical device constructs confirmation codes (proving I know the PIN) for specific inputs, like if I want to send money somewhere I've never sent it before or a much greater amount of money than usual.
However, unlike U2F or its modern successor WebAuthn, that's still in principle vulnerable to phishing: if thirstdirect.example pretends to be firstdirect.example and I don't notice, the codes I give to the wrong site will work on the real one.
They are not congruent.
WebAuthn can replace the entire authentication, because it can perform multi-factor authentication locally and then send a claim to have done so, optionally backed by attestation from a vendor saying they promise the multi-factor authentication is done by their product. For example, an iPhone can have one-press sign-in to web sites or apps using this technology.
- using longer passwords (or salts) is better than increasing the number of rounds
- having the same database on different devices (top-end CPU x older cellphone) has an impact on performance for the user but not for the attacker (who will use powerful hardware anyway)
Seems fair, for the average user. And the top user will prefer a longer password anyway.
The best most people can remember as a password is some variation on common words and their date/place of birth.
Hence it doesn't matter what algorithm a database is using; computers will crack most passwords very effectively, given a dictionary of common words and minimal mangling rules.
The only defense against cracking is much more complicated (very long) passwords, but people can't remember those.
In this role, you very much have a practical option to just pick a decent password. "stonks" is not a good password, the second one is a Rihanna lyric ("... but chains and whips excite me"), but the third one is a pretty obscure reference, and it's neither likely that your adversary would "guess" it nor that the sort of brute-force attacks envisioned would hit this random-looking 13-character password.
Anyway, even as a password hash I've made the argument previously that stronger hashes only marginally improve things. Far too many of your users will pick "stonks", or maybe "stonks!!" if you insist on eight characters, and even with Argon2 tuned way up the attacker can reverse that because it's too obvious. If your users pick unique random passwords (as they might with tools like Bitwarden or 1Password, even though I personally use zx2c4's pass), then it doesn't matter if you use a terrible hash like some turn-of-the-century PHP forum using MD5, because that's still safe with such passwords.
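The "unique random passwords" point is the whole game. A minimal sketch of what a manager generates (stdlib only; the length and alphabet here are illustrative):

```python
import math
import secrets
import string

alphabet = string.ascii_letters + string.digits  # 62 symbols
password = "".join(secrets.choice(alphabet) for _ in range(20))

# 20 chars from a 62-symbol alphabet is ~119 bits of entropy:
# infeasible to brute-force even through a fast hash like MD5.
entropy_bits = 20 * math.log2(len(alphabet))
print(password, f"({entropy_bits:.0f} bits)")
```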
Argon2id with t=3, m=92MiB, p=1 should be slower than bcrypt with cost 12 on most modern GPUs. That's not actually all that much memory, you can handle quite a few concurrent logins on a server with those settings. 10 concurrent logins per GiB of RAM dedicated to the task. And it should only DOS the login process, so it might make taking out your auth process (or server) easier but won't necessarily harm any other part of the site.
It's reasonable to expect an attacker to have a SHA-256 ASIC, given that Bitcoin has made those a commodity.
Potentially this can be offset if your server has https://en.wikipedia.org/wiki/Intel_SHA_extensions but if you're running on cell phones that's not there
See also https://bitcoin.stackexchange.com/questions/36253/what-minin...
Yeah, over 20 years ago. It's woefully out of date by modern standards.
PBKDF2 doesn't even attempt memory hardness, so there are whole classes of attacks on later generation slow hashing algorithms that don't even apply to PBKDF2 because of how old it is. Argon2 is extremely resistant to Time-Memory-Trade-Off (TMTO) attacks, which older algorithms like bcrypt and scrypt are vulnerable to.
PBKDF2 is essentially a linear slowdown, which is effectively pointless these days.
bcrypt and scrypt, the successors to PBKDF2, are both more than a decade old.
RSA is half a century old and it's still up to date by modern standards. In fact, nobody has come up with anything better.
Edit: Actually, bcrypt might go as far back as 1999, possibly older than PBKDF2.
RSA is currently a minefield of gotchas and few security companies even get it right. Just generating a good key is actually a very difficult task. It is also very computationally slow and has many practical issues for the level of security it provides.
There are many superior replacements in both the PQC and elliptic-curve spaces. I would take EdDSA over RSA-PSS any day of the week.
EdDSA is another level entirely. I don't know how you can recommend elliptic curve cryptography with a straight face if you think RSA is hard.
P.S. It's a myth that EdDSA is faster. This depends on operation (signing vs verification) and key size.
The private key is a byte string and its quality only depends on the random generator. It's trivially fast to generate 32 bytes of decent-quality random numbers these days. There are many insecure RSA generation methods with weak criteria. Too many are fossilized in libraries and crypto cores. RSA also has half a dozen padding schemes, and most are now considered weak or vulnerable.
EC is generally considered much stronger for a much smaller key size.
Why does it have to be linear? Just use 1M iterations today, 2M iterations next year, 4M iterations the year after that, and you'll have an exponential version of it.
You can even take your 1M-iteration hashes from this year and execute an additional 1M iterations on them to upgrade them to 2M iterations when you want.
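A sketch of that upgrade path with stdlib PBKDF2 (iteration counts scaled way down from 1M so the demo is quick). One caveat: composing two runs is not bit-identical to a single run with the summed count, but it does add the extra work without needing the user's password:

```python
import hashlib

ITERS = 100_000  # stand-in for the 1M above

def pbkdf2(data: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", data, salt, ITERS)

password, salt = b"hunter2", b"per-user-salt"

# This year: one pass is stored.
stored = pbkdf2(password, salt)

# Next year: run another pass over the stored hash; no password needed.
stored = pbkdf2(stored, salt)

# Login now verifies by running the submitted password through both passes.
assert pbkdf2(pbkdf2(password, salt), salt) == stored
```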
In practice, bitwarden's server limits the max iteration count on a user account to something that remains insecure. They refuse to fix it.
Having 2FA enabled on all other accounts makes me sleep better, in case somehow one day BW or any other password manager gets compromised.
This has nothing to do with my usage.
255! vs 255! / (255 - 30)!
My math could be off though, I haven't worked with factorials since university.
With unknown size, cracking 30 characters takes time proportional to n^30 + n^29 + n^28 etc.
Cracking just 30 is proportional to n^30.
The difference is negligible. A percent or two.
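That claim is easy to check numerically (62 symbols = a-zA-Z0-9):

```python
n = 62  # alphabet size: a-zA-Z0-9
all_lengths = sum(n**k for k in range(1, 31))  # every length from 1 to 30
just_30 = n**30

# How much extra work do lengths 1..29 add on top of length 30?
overhead = all_lengths / just_30 - 1
print(f"{overhead:.2%}")  # ≈ 1.64%, i.e. "a percent or two"
```

The series is geometric, so the overhead converges to roughly 1/(n-1) regardless of the maximum length.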
As for Bitwarden's implementation: it doesn't send the password to the server, it sends, basically, a PBKDF2 hash, which is different from the one used for encryption. The leaked hash can't be used to decrypt the database unless it's bruteforced. However, the protocol is not ideal, there's a weakness that I wrote about here: https://dchest.com/2020/05/25/improving-storage-of-password-...
AFAIK, 1Password uses SRP with PBKDF2 for verifier.
No, there is an important difference: leaking this verifier does not let an attacker impersonate the user at will. See my other message.
Is my understanding correct that you derive two keys from user password, one used for authentication and one for decrypting encrypted content which does not leave the user's computer? In that case, yes, it's somewhat better than the typical scenario, though I personally would still prefer if a proper PAKE was used for authentication. It may not apply to your service, but leaking encrypted data can still result in exposing certain meta-information, which may be important, so it's better to be extra-safe in such matters.
The same happens with PAKE during registration, where the user will need to provide the verifier. Since everything happens over TLS anyway, and hopefully, with pinning (in apps), this is not a huge concern.
I'm not against PAKE — the biggest benefit is that you don't have to create your own protocol and make mistakes. What I'm saying is that for such use cases its security benefits compared to other protocols are negligible.
In a typical PAKE, the generated challenge depends on random values generated by both the server and the user, so if at least one of them is not controlled by an attacker, the challenge will be different each time. So leaking the verifier or eavesdropping on previous logins does not help an attacker impersonate the user in any way.
When retrieving the vault, the program generates the secondary key and provides it to the server (you can add 2FA, btw); the server sees that it indeed matches one of the IDs and sends you back the encrypted vault. Decryption is done locally.
I don't see any issue with this process; maybe you'd care to point out the attack vector/scenario?
You can add 2FA to the authentication process.
All metadata is obviously encrypted; the data is just a single blob. You could maybe try to guess the number of entries based on size, but that's too dynamic as well.
These products were attacked before, but decrypting the vault wasn't how.
It was via bugs/vulnerabilities in the browser extensions that led to data leaks; something else entirely.
Citing the OP:
>Sure, you might tolerate a longer unlock time, but is the security gain really worth the cost to your battery?
I think the battery concern is over-blown. How often do you login into a service? I think that for typical use-cases, amortized battery-cost of a login is negligible. And for other use-cases you can let users choose.
No. Master key doesn't leave client machine, only its hash is transmitted over the network. See dchest's link above.
I don't see any issues with that.
Via the master key, the program derives (locally) the key to encrypt the data and a different secondary key for authentication against the server. Without knowing the master key you can't decrypt the vault, even if you were able to trick the server into sending it to you.
The vault is decrypted locally
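A minimal sketch of that two-key split; this shows the general pattern, not Bitwarden's exact scheme, and the labels and iteration count are illustrative:

```python
import hashlib
import hmac
import os

def derive_keys(master_password: bytes, salt: bytes) -> tuple[bytes, bytes]:
    # Stretch the master password once with a slow KDF.
    master_key = hashlib.pbkdf2_hmac("sha256", master_password, salt, 100_000)
    # Domain-separate two independent subkeys from it.
    enc_key = hmac.new(master_key, b"encryption", hashlib.sha256).digest()
    auth_key = hmac.new(master_key, b"authentication", hashlib.sha256).digest()
    return enc_key, auth_key

salt = os.urandom(16)
enc_key, auth_key = derive_keys(b"master password", salt)
# Only auth_key is ever sent to the server; enc_key never leaves the client,
# so a leaked auth_key alone can't decrypt the vault.
assert enc_key != auth_key
```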
Like what? It looks like personally my phone is 4x slower than my desktop. So if I calibrate for one or two seconds on my phone, the security should be pretty acceptable. Do I need to worry about devices much slower than that?
I don't think a 3s wait for a session, in exchange for greater security, and that on an unoptimized device, is going to break UX.
The submitted GitHub comment also makes that point: this is actually hard to do.
Choice is always better. People who care/worry can change to something more resistant to cracking.
Users will forget even their Master Passwords, and because they forgot them they will believe they've been "hacked" and blame you.
Users on Hacker News and similar sites, where people actually understand the underlying technology to some degree, are the exception, not the norm. Adding options does not help the vast majority of the user base, and it complicates your codebase further. Imagine making that change and having less than 1% of your users actually use it.
You're thinking wrong.
argon2 has nothing better to offer. Practically there are 3 Argon variants to choose from and they all require careful configuration. It's pretty hard to start with, assuming you can find libraries for it in the first place; last I checked it wasn't commonly supported.
It's a perfect example of theory versus practice. Argon is a researcher's wet dream, ideal by some algorithmic definitions, yet it has no benefits in practice.
Modern GPUs and ASICs can perform billions of SHA-256 operations per second; even with a poorly configured Argon2, you reduce that massively.
If you were to leak your company database with 1 million customers and hashed passwords, there are some theoretical considerations to be made on resistance to GPU and ASIC cracking; practically you're in a pretty bad place whichever algorithm was used. ^^
P.S. Cryptography would have more weight if half the passwords weren't a variation of password2021 and hunter22.
But you can. It’s literally just N times the hash. Typically the number of iterations is chosen to be somewhat slow on the server that derives it. But a specially designed rig can execute this with extreme parallelism and speed.
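The "N times the hash" structure is easy to see if you write out a single PBKDF2 output block by hand (per RFC 2898) and check it against the stdlib:

```python
import hashlib
import hmac

def pbkdf2_sha256_block(password: bytes, salt: bytes, iters: int) -> bytes:
    # First iteration: HMAC(password, salt || 4-byte block index 1).
    u = hmac.new(password, salt + b"\x00\x00\x00\x01", hashlib.sha256).digest()
    out = u
    # Remaining iterations: HMAC chained N-1 more times, XOR-accumulated.
    for _ in range(iters - 1):
        u = hmac.new(password, u, hashlib.sha256).digest()
        out = bytes(a ^ b for a, b in zip(out, u))
    return out

# Matches the library implementation for a single 32-byte output block:
assert pbkdf2_sha256_block(b"pw", b"salt", 1000) == \
       hashlib.pbkdf2_hmac("sha256", b"pw", b"salt", 1000)
```

Each iteration depends on the previous one, so a single derivation can't be parallelized, but nothing stops an attacker from running millions of independent password guesses in parallel.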
That said, asking the user to manually type full-size generated keys between devices is simply a nonstarter.
But what if the user stored their passwords in a private Matrix room? Matrix's solution to sharing encryption keys between devices is to be the communication channel by which users approve new devices from existing devices; upon approval, the room's encryption key is sent to the new device, encrypted with the new device's public key. That is, the room key can only ever be seen by the devices themselves. (I think this is a reasonable summary of encryption in Matrix; please correct me if I'm mistaken.) This is basically using a Matrix room as a general distributed, encrypted data store. Thoughts?