I don't think their scheme results in different keys for each site, but I could be wrong.
var cipher = crypto.createCipher('aes-256-ctr', key.toString('hex'))
A single hard-coded salt for key derivation:
const key = crypto.pbkdf2Sync(auth, '0945jv209j252x5', 100000, 512, 'sha512');
Can everyone who is developing crypto apps Just Use NaCl/Libsodium?
So for the hundredth time, if you're not a cryptographer or experienced security engineer, please stop releasing and promoting your crypto-related projects before they have been vetted by someone who is. If this is something you intend to release, ideally run the basic idea by someone qualified first. By not doing so, you are doing active harm. Someone's life and/or liberty may very well depend on the software you write, and when you fail them in this regard you are ethically and morally responsible when these things are taken from them.
Thomas responded alongside this comment to talk about how academic cryptographers are not necessarily qualified to implement original crypto, and I largely agree with that; however, I don't actually think that's the issue here. Rather I would pin this on a lack of peer review.
I could be wrong, but I don't believe the author of this paper has had it published or at least accepted in any journal or conference proceedings. Being an eprint format with endorsement rather than peer review, you can expect mistakes like this to happen often, even if the authors are ostensibly qualified. When you submit original research for publication you generally go back and forth a bit with adjustments as needed, and as long as there is nothing egregious you don't need to redo it all.
In this specific case, I believe the author fully understands the issue (or would, were it presented to them) and is fully capable of fixing it. A qualified peer review would (hopefully :) have caught this and other latent issues, just as an HN commenter did.
We see this in the broader mathematics and computer science communities, and we especially see it in sub-disciplines like machine learning as well. It's absolutely true that academic cryptographers should not be assumed capable of rolling their own crypto a priori, but in my (educated) opinion I would certainly place far more weight on crypto developed by an academic cryptographer than a software engineer without any particular training.
My platonic ideal for someone who is capable of developing original crypto is something like an academic with a PhD in math or computer science (focusing on crypto), who can develop software very well and who joins an applied lab for crypto engineering and development (like NCC's) or a top cryptanalysis firm like Riscure. Failing that, I'd probably place the most weight on someone who had a lot of training in crypto engineering or practical cryptanalysis over an academic with no implementation experience.
(I apologize if any of this is patronizing, I don't know what your background or familiarity with the academic process is w/r/t peer review, etc).
Would the qualified peer-review necessarily be reading the NodeJS code, or just checking the theoretical soundness of the paper? I'm not so certain about the former...
People seriously need to stop rolling their own crypto.
1) Why was CTR mode chosen? I would probably go with something like GCM: privacy + integrity check.
2) The IV should ideally be re-generated on every re-encryption. It doesn't have to be secret, but it has to be random (securely random).
The secrecy/predictability/uniqueness rules for IVs and nonces depend on the specific cipher mode you're using, so be careful about writing generic recommendations. Also, be very careful with the word "ideally", because if you get an IV or nonce wrong, chances are your problems are much worse than "not ideal".
My biggest problem these days is dealing with sites that don't allow 30+ char passwords with full range of special characters. Almost exclusively, banks.
I sent a strongly worded email to a handful of people at the bank asking them if they really thought it was a good idea to teach their customers that it's okay to click a link in an email claiming to be from their bank and provide that much personal information. I never got more than a canned response back, but several months later they did overhaul their password reset procedure to a more typical one.
"Oh, we changed your username as a security measure."
Legacy software or partner integration requirements are almost always the reason for arbitrary changes within financial institutions.
Anecdotally, I've never seen a scenario myself where this wasn't referred to as a "security measure" or similar. This includes within the institution to all non-technical employees.
Not saying the argument is valid, just that it may be the reason.
But OK, I assume they still pass whatever dry-run test procedures exist. But if it is true that they are based on floppy disks, then I don't think sticking with them is a net win for reliability and safety.
Bits rot, especially on floppies. So if they are still using them they have created procedures for refreshing the bits from backups. By now the floppies might be little more than ceremonial objects inside the systems that really determine the launch procedure.
And those systems would have evolved informally over time and might be ever-changing, poorly understood and poorly tested.
If there is a security issue with your card, HSBC Fraud department will send you a text, telling you to call them.
The text comes from an unknown number. The number you are told to call is not listed on their website anywhere.
At the same time they are sending letters to customers warning about how to protect from phishing attacks.
I've been trying to explain to them that they are training their customers to accept phishing attacks, but they are having none of it.
Yes, I understand that they don't want to publicise their special number on the website - but at least put it on an unlinked page, so that if a customer visits hsbc.com and searches for the number it comes up in the results.
I really don't understand these people.
It needs to be written into law in all jurisdictions that if a bank has been negligent in security, then when responsibility for a breach is unclear, the benefit of the doubt should be given to the customer and the breach classified as a bank robbery.
I imagine a lot of customers are using the same number in each.
1. As far as I can tell, only the passwords are encrypted, not the entire database. This is a little annoying from a privacy standpoint, since it means I have to trust whatever cloud storage system I'm using with a list of every site I have an account on.
2. No decent browser autofill. Yes, there's browserpass, but as far as I can tell it requires manually searching your password database for the site you're on, which is somewhat inconvenient and doesn't help at all against phishing attacks.
3. No InputStick support. This means if I ever have to enter a password on a computer that doesn't have pass installed, I need to open my password DB on my phone and manually retype the necessary password instead of being able to just auto-type it.
4. No autotype on desktop - This isn't quite as big of a deal, since most programs will let you copy/paste passwords just fine, but as far as I can tell no pass desktop clients include support for auto-typing login credentials. For many non-web apps that require passwords that feature is extremely helpful.
If these ever get fixed, there's a decent chance I'd switch.
I may not understand what you mean by "autotype on desktop", but I use the dmenu password-store extension, the executable "passmenu". It accepts the command-line option "--type", which I think is quite close to what you desire.
This seems like a fairly decent substitute if you're on Linux. Doesn't look like it'll work on Windows or MacOS though.
Also how do you store your password file and sync across devices?
And what do you use on iOS? I haven't found a very convenient workflow for my phone. No phone app I've found can keep synced with a password file that's stored on Google Drive.
For browser autofill I'm using the KeePassHttp plugin with ChromeIPass.
For InputStick I'm using Keepass2Android with KP2A InputStick.
Autotype is a built-in feature of KeePass on desktop.
Password database is stored on Google Drive, encrypted with a 2-part file and password based key. The file-based component of the key is stored locally offline on all my devices. Changes to the database get synced automatically by Google Drive on my PCs and by Keepass2Android on my phone.
I'm not sure about iOS, unfortunately. My phone is running Android, so I haven't really looked into KeePass clients for iOS.
Is that what you mean? It looks like it's written using Mono for non-Windows platforms. I might have shied away for that reason. What platform do you use it on? I'd love to know if it works well on non-Windows platforms (OS X, Linux).
Another option you might want to look at is KeePassXC. It's a fork of KeePassX, and it has built-in (though off-by-default) support for keepasshttp.
Linux: rock solid for several years now. We have several KeePass databases shared by 20-odd people that contain several thousand entries. I use it on Gentoo and Arch desktops. It runs under Mono, plus a few extras to do things like sending links to browsers, autotype, etc.
For autofill, KeeFox works well for me on Firefox - there's probably something similar for Chrome. I think KeePass will do autotype if you right-click on an entry but it's not a feature that I really use so I'm not sure.
I store my password DB in my home folder and use syncthing to synchronize it to my other computers and my phone.
I don't know about iOS but I use KeePassDroid on Android and it works pretty well.
The others are things I've noticed as well, and do wish to see implemented.
2. Right. I use rofi-pass on Linux which fills in the selected password or other login data without any clipboard action.
3. True. Never thought about this but sounds clever.
4. Same answer as in 2.
Plus I have a good password history through the git commits. Of course, I only push it within my local network.
And I use more than just one password store. Inside each store you can decide, on a per-folder basis, who can decrypt the passwords and who can't.
2. Is that really auto-fill though? Seems more like KeePass's auto-type. I meant like a browser extension that knows what site you're on and fills in the appropriate username/password combo automatically.
4. Cool, that looks like a decent solution for auto-typing. I hadn't heard of that before. Linux-only though from what I can tell, so it won't work for me in the general case.
KeePass has built-in password history too, but I do really like the idea of using git for that; that's one of the reasons why I'm interested in pass.
Your comment is completely off-topic.
I'm wondering if there is a way to combine find and show with pass though. Like pass find <keyword> and show the password in one command. I realize I could probably use the grep command, but then I'd have to put the keyword into the encrypted file as well, and I'd need to decrypt all files to find it.
Edit: Gah, I was just thinking this should just be an alias, but the output of pass find can't just be fed to show either..
Actually, the more I think about it, I don't think I want this extension to do that... how does it know how much time passes between when the fake fields are entered and when I press submit? Now I am going to read the paper...
Edit: Yup, the extension intercepts all network traffic, even before you click submit. If you, e.g., hash on the client side, this password manager will break. If you never click submit, it seems this extension will continue to read all HTTP bytes going from the page back to the server, looking for certain strings... I'm not sure what the perf implications of this are.
I like it because there isn't any third party or service for me to trust, but I can still have unique complex passwords for each purpose. It feels pretty much the same as having them all in my head.
Though it does do syncing.
I'd like to make the UI work better in general, but in particular on mobile devices. But then I have many projects I'd like to make time for, and this one is probably usable enough for me for now.
Your reply, however... stamps a "paranoid" label on it :)
There seems to be a growing chatter on HN and elsewhere about how you can't trust free software because you didn't audit/write it yourself.
I'm not sure if people actually believe that or use it as a way to point out how they're smarter than other people.
Sure, it's possible that the free software contains something malicious, but such an attitude is not constructive. If you only used software you wrote yourself, you might as well not use a computer. There are degrees of trust and balancing of risks, and free software has a much better reputation for not containing malicious code.
It is in JS at least. Underhanded C is likely an easier trick to manage.
Basically, the story was that a program for grad research was inserting all kinds of nasty, anti-Semitic things into text, and it turned out a previous grad student had poisoned the compiler, which was modifying the strings and was able to re-poison itself every time through something else.
I forgot the exact details but it is an amazing read.
Then they aren't paranoid but normal folks, eh?
I don't know about other people, but I find anything I can't find security holes in myself "as good as" anything I've written. I've got the same set of assumptions/blinders/competence either way.
Any attempts to create a standard will result in several competing standards ending in this ubiquitous problem: https://xkcd.com/927/
People should be able to use their phones, Yubikey, TouchID, etc. as their authentication, without needing a password (except a master password for the phone/Yubikey/TouchID).
And yet I see no movement towards this from the industry. Are we stuck in this terrible state forever?
If I could see into the future I would dearly hope that in 10 years we are not seeing HN front page articles about the newest innovations in password managers.
If I make an integration with Yubikey or iPhone, then that works only for that style of authentication.
On the other hand, if I make simple username/password authentication, then all password managers in all browsers and with all devices will work for securely storing the authentication tokens.
The whole point is that sites don't (and shouldn't) make a decision about how the user should store all their credentials, otherwise some users would be prevented from using the password management process of their choice. Instead, they just ask for a random password and the user (or his/her devices) can store it however they want.
My concern is around having a single source of failure/attack. Which, to be fair, is probably not that different than most people's scenarios. That is, most people just let their browsers and phones remember passwords. So, in practice this already happens. I just don't feel like it is safe. I'm highly interested in being challenged on this.
If you use the same password (even a high-entropy one) on every website, any hack of a single site compromises you on every website.
If you use a password manager to store high-entropy passwords for every site you visit, then the password manager is a single point of failure.
Both of these attacks happen in the real world. But securing a single authentication system like a phone app/Yubikey/TouchID/etc. seems more tractable than securing every website that you visit.
Password managers are available today, but since the websites you are using don't actually know about them, the integration and ease of use are not as great as they could be.
My vision is that you can use a phone app/Yubikey/TouchID/etc. to create as many identities as you want, and then you can use these to robustly authenticate to any website. To the website you would just be a GUID: an opaque identifier that it doesn't know anything about unless you tell it more. And all the app/Yubikey/TouchID does is let the website know (securely) that you are the same GUID that logged in last time.
Which, I fully cede is also a single point of failure. Basically, I think we have plenty of weak links in the chains. I'm not too keen on codifying any new ones. If you have something that increases the strength of everything, I'm game to try it. And I am highly interested in studies that look into different strategies.
Edit: It seems people don't understand what I mean... I just put some code here to better explain: https://github.com/w8rbt/dpg
One is really not that much more secure than the other particularly when dealing with online websites.
But yeah, if the site gets hacked you have to pick and remember an iterated password (i.e. the next iteration) or regenerate a new master password.
Of course, if you are really paranoid you can always generate a password with dice and write it down somewhere in a safe place (i.e. offline password generation).
password = hmac(url++nonce, master_key)
Every time you need to change your password (because it leaked), you simply generate a new nonce. You only store the nonce and the URL, and this seems like a very secure scheme to me.
You cannot change the master key after the fact, so make sure it's secure.
I'm not a cryptographer, so I'm probably missing something obvious.
We still need to solve the "password constraints" problem. I would say we could add functions that, given the stored password, deterministically create a string that adheres to some specific password scheme. For example, by using the password as a seed to a random number generator that fuels traditional password generators like the one included in LastPass. The "scheme" would also be stored next to the nonce and the URL.
And if you end up using a database for storing your nonces and hashing schemes you'll end up with the same limitations and attack vectors the parent was complaining about.
You could argue that it makes offline attacks easier, because many encrypted storage formats have a way to check whether decryption was successful, but you could remove this feature if that was a problem for you. At any rate, using a strong enough passphrase would make this attack impractical in the first place. And being notified when you've used a wrong passphrase is pretty useful, IMO.
Is there a different solution?
For those corner cases you can have different password generators, for more complex constraints. But overall it's not a big hurdle.
I use this tool to generate the passwords: http://hackage.haskell.org/package/scat
With a password manager that randomly generates unique passwords, you don't have that problem, but you do have to synchronize the data.
(Nevermind that you can't change individual passwords or the master password at will with a deterministic scheme.)
The seed is just the seed, and will always be the seed. The master password can change and be supplemented by 2FA / other enhancement schemes.
I mean, sure, it's not the most secure thing, and yeah, you have to remember which iteration you are on once the site has some sort of data breach or makes you change your password, but it generally works pretty well.
"The device is a small USB gadget that effectively acts as a SSL proxy, allowing it to have direct and first access to the clear text data to and from the host. It has a small user interface so it can accept (and optionally store) specific user input (PINs, passwords, credit card no, etc) and display specific user/server output data fields. This allows it to insert either pre-stored or on-demand passwords etc. into the outgoing stream. It also allows specific confirmation data fields from the server to be displayed e.g. a beneficiary account no., balance amount, etc.
So the actual passwords are only available in the comms path and PC-side in SSL-encrypted form and any confirmation fields (beneficiary account no., balance amount) are confirmed to the user via the integrated display just before being SSL-encrypted, so they cannot be manipulated before going to the server.
In terms of inserting a password, it automatically detects the standard HTML password form field and the corresponding response message (used by all secure web servers), and substitutes a dummy password for a real one. I have confirmed that this function works on all the major web sites e.g. PayPal, Amazon, E-Bay, Google, LinkedIn, etc. and at least four banks that I have checked (probably most).
For displaying server confirmations, it might require a little help from the host server in terms of detecting the HTML display field, or else it would need to store a profile for each website (not ideal). However, it costs the host service very little to tweak a bit of HTML.
I agree with you whole-heartedly about the user being in the loop. That is what my aim was with this device – The gadget directly authenticates the server through normal SSL and displays the server validity directly to the user via the integrated display. In Internet banking the main threat is the beneficiary account number. As long as the user is satisfied that the account number displayed on the device is correct then it cannot be manipulated other than by breaking the SSL crypto. In this it is almost identical to the IBM ZTIC. The difference is that it also allows the user’s password to be sent securely to the bank and therefore does not require the use of a client cert (so no private key needed)."
Personally, I'm more concerned that when I enter my master password, if there's a Trojan on my computer, it's game over.
As most password managers are an encrypted database of sorts, once it's decrypted, all your passwords are out.
The only protection against that is some sort of 2FA.
If you have a trojan on your computer it'll still be able to decrypt your passwords while the token is plugged and unlocked of course, but at least offline attacks shouldn't be possible. Of course that doesn't mean you shouldn't use 2FA wherever possible to mitigate the risks further.
Some tokens can also be configured to wait for a physical button press to allow the decryption, which can help prevent a complete background decryption of your passwords (assuming that each password is stored with a different key, as with password-store, for instance).
The only ways I can think that would be able to prevent this are:
1. Physical non-networked device password manager
2. Really creative usage of SELinux
In all honesty, I could see buying an Android phone at a pay-as-you-go place and repurposing it as an offline password manager. Never connect it to WiFi, BT, or cell. Any APKs you need, you load via microSD card and USB.
This is my attempt in progress:
Honest question: what does 'holistically' mean in this context?
Good question - it means the whole system (that is, both the client and server) were designed with the goal of limiting the exposure of the user's passwords in all ways we can - including time (not exposing them in the DOM and having the minimum possible exposure to the browser with acceptable user experience), code (minimizing the amount of code that has access to the user's credentials), and organizations (minimizing the trust the user has to put in any one provider).
The paper offers an interesting but extreme extension into one aspect of security that users control. It is great to offer an extreme solution for the paranoid, and if implemented well, even the less paranoid could use it.
Sadly this solution still offers less security than systems integrated into SSO/OAuth and 2FA.
OAuth implementations very often have vulnerabilities. Just look at how many Facebook had!
Not to mention that 1) it doesn't scale like passwords and 2) it has all the privacy implications.
As for 2FA it’s in addition to primary authentication scheme, like a password, not instead.