We've already gone through a number of solutions, and regardless of the tech behind it, this is just another example of an app-approval flow. Twitter & Facebook let you approve logins from already-approved apps, and while their underlying tech is different from yours, your server can still swap that tech out maliciously.
For a brilliant app idea, see Kryptonite. It lets you install a browser add-on that pretends to be a U2F key, but the private keys are stored in your phone's secure key storage (Keystore on Android, Secure Enclave on iOS). The experience is similar to what I assume your phone-approval flow would be, but it works on every website that supports U2F.
For your own site, I'd recommend supporting U2F tokens and TOTP/HOTP keys; they're considered really good solutions, and while SMS is considered bad, it's still better than no 2FA. Please see the post by Troy Hunt on 2FA from late last year. It covers more than I possibly could.
Sorry for the negative critique, but I'm sure you'll build something great next time ;) Instagram was first launched as a Foursquare clone.
Thankfully Krypton supports the new protocol too :)
The trick WebAuthn/U2F pull off is equivalent to the device somehow, magically, having an effectively infinite number of private keys: not just one, or one per site, but one for every enrollment on a site. If Alice and Bob share the same FIDO key for, say, Facebook, the only way even Facebook itself could discover this is to ask Alice's key to authenticate as Bob, which will work and give the game away. Every such guess requires a human interaction (a button press) and risks discovery, so probably no one will ever try.
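One common way an authenticator gets "a key per enrollment" without storing anything per site is to derive each credential's key from a single device master secret and a random handle the site stores for it. A minimal sketch of that idea, assuming an HMAC-based derivation (this is not the actual FIDO key-wrapping scheme, and all names here are mine):

```python
import hashlib
import hmac
import os

DEVICE_SECRET = os.urandom(32)  # burned into the authenticator at manufacture

def register(rp_id: str) -> tuple[bytes, bytes]:
    """Create a new credential: a random handle plus the derived private key."""
    handle = os.urandom(16)  # stored by the website, not by the device
    priv = hmac.new(DEVICE_SECRET, rp_id.encode() + handle, hashlib.sha256).digest()
    return handle, priv

def authenticate(rp_id: str, handle: bytes) -> bytes:
    """Re-derive the same private key from the handle the site sends back."""
    return hmac.new(DEVICE_SECRET, rp_id.encode() + handle, hashlib.sha256).digest()

# Two enrollments on the same site yield unrelated keys...
h1, k1 = register("facebook.com")
h2, k2 = register("facebook.com")
assert k1 != k2
# ...yet the device can always re-derive each key from its handle.
assert authenticate("facebook.com", h1) == k1
```

This is why the device never runs out of keys: the "storage" for each credential lives on the website as the opaque handle, and only the master secret stays on the device.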
The Ledger Nano S (a hardware crypto wallet) has a virtually infinite number of public/private key pairs. In fact, all hardware wallets that implement "hierarchical deterministic wallets" do. All the wallet needs to determine the private key for one of its public keys is the "derivation path" of that pair. Even more interesting: if a site knows one of your public keys, it can't determine the rest of them without knowing your "master public key".
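The derivation-path idea can be sketched as a hash chain walked one path segment at a time. This is a simplified illustration, not the real BIP32 elliptic-curve arithmetic, and the identifiers are my own:

```python
import hashlib
import hmac

def derive(master_seed: bytes, path: str) -> bytes:
    """Walk a derivation path like "m/44/0/3", hashing one segment at a time."""
    key = hmac.new(b"seed", master_seed, hashlib.sha512).digest()
    for index in path.split("/")[1:]:  # skip the leading "m"
        # Second half of the digest acts as a chain code keying the next step.
        key = hmac.new(key[32:], key[:32] + index.encode(), hashlib.sha512).digest()
    return key[:32]  # first half acts as the private key

seed = b"correct horse battery staple"
# Same seed + same path always reproduces the same key...
assert derive(seed, "m/44/0/0") == derive(seed, "m/44/0/0")
# ...while sibling paths yield unrelated keys.
assert derive(seed, "m/44/0/0") != derive(seed, "m/44/0/1")
```

The whole wallet is thus recoverable from the master seed alone; the paths are just coordinates into an endless key tree.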
Upside is the convenience of not needing the original account passphrase to add new devices. Depending on your threat model, demanding a passphrase to add a new device might not add anything to the security.
Downside (I imagine) is that people can misplace their passphrase, let the account grow in value for a long time, and not realize the passphrase is gone until they've lost or destroyed their last authenticated device. Then they're screwed, because they no longer have any way to get back in.
What does reduce security is the use of SMS as a password-reset mechanism, or any similar method that uses SMS as the only factor for authentication. Don't do that.
I get that security is a bunch of trade-offs, but this post seems to present it as a fool-proof method: the stated goal is to prevent the server from being able to see the private key ("how can the user's private key be transmitted in a way that doesn't reveal their credentials to the server"), and the given solution doesn't actually fulfil this goal.
One way to do this is to have the new device generate a random passphrase, display it on the screen and require it to be typed into the already authenticated device. Then the devices can use PAKE with that passphrase to establish a secure channel between each other. Even if the data still goes through the server, it's encrypted and the server can't read it.
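Roughly, a PAKE mixes the passphrase into a Diffie-Hellman exchange so that the relaying server learns nothing and an offline guess of the passphrase is impossible. A toy SPAKE2-style sketch over a modular group (deliberately insecure parameters, illustration only; a real implementation would use a vetted library and group):

```python
import hashlib
import secrets

P = 2**255 - 19  # toy prime modulus
G = 2
# Fixed public blinding elements, one per side of the exchange.
M = pow(G, int.from_bytes(hashlib.sha256(b"M").digest(), "big"), P)
N = pow(G, int.from_bytes(hashlib.sha256(b"N").digest(), "big"), P)

def scalar(passphrase: str) -> int:
    return int.from_bytes(hashlib.sha256(passphrase.encode()).digest(), "big")

def start(blind: int, pw: int) -> tuple[int, int]:
    """Return our secret exponent and our blinded public message."""
    x = secrets.randbelow(P - 2) + 1
    return x, (pow(G, x, P) * pow(blind, pw, P)) % P

def finish(x: int, peer_msg: int, peer_blind: int, pw: int) -> bytes:
    """Strip the peer's blinding, complete DH, and hash into a channel key."""
    shared = pow((peer_msg * pow(peer_blind, -pw, P)) % P, x, P)
    return hashlib.sha256(shared.to_bytes(32, "big")).digest()

pw = scalar("displayed-on-new-device")
x, msg_a = start(M, pw)          # new device
y, msg_b = start(N, pw)          # already-authenticated device
key_a = finish(x, msg_b, N, pw)
key_b = finish(y, msg_a, M, pw)
assert key_a == key_b            # both ends now share a secret channel key
```

Only `msg_a` and `msg_b` cross the server, and without the passphrase it can't unblind them, so everything encrypted under the resulting key stays opaque to it.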
Another method is to have the new device display its public key as a QR code and have the existing device scan it.
"We only handle keys client-side" is irrelevant when the server can ship code, at any time, that delivers keys, passphrases, or plaintext anywhere it likes.
• On new device, make a key pair.
• On new device, log in to server and send new device's public key to server.
• Server sends approval request to old device, which includes the new device's public key.
• On old device, receive the approval request and the new device's public key. Decide to approve new device.
• On old device, take old device's _private_ key, encrypt with new device's public key, and send encrypted blob to server.
• Server relays approval message to new device.
• New device decrypts blob and deletes its original key pair (the one generated at the start). New device now uses old device's private key. The public key is derived from the private key.
So in the end, old device and new device have the same key pair.
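The steps above can be sketched end to end like this, with a toy Diffie-Hellman construction standing in for a real public-key encryption scheme (every name and parameter here is mine, not the article's):

```python
import hashlib
import secrets

P = 2**255 - 19  # toy group parameters, not production-grade
G = 2

def keygen() -> tuple[int, int]:
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def stream(shared: int, n: int) -> bytes:
    # Keystream from the DH shared secret (capped at 32 bytes in this toy).
    return hashlib.sha256(shared.to_bytes(32, "big")).digest()[:n]

def encrypt_to(pub: int, msg: bytes) -> tuple[int, bytes]:
    """ElGamal-style: fresh ephemeral key, XOR message with derived keystream."""
    eph, eph_pub = keygen()
    ks = stream(pow(pub, eph, P), len(msg))
    return eph_pub, bytes(a ^ b for a, b in zip(msg, ks))

def decrypt(priv: int, eph_pub: int, ct: bytes) -> bytes:
    ks = stream(pow(eph_pub, priv, P), len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

old_priv, old_pub = keygen()                              # trusted old device
new_priv, new_pub = keygen()                              # step 1: new key pair
# Steps 2-4: new_pub travels via the server; the user approves on old device.
blob = encrypt_to(new_pub, old_priv.to_bytes(32, "big"))  # step 5: wrap old key
# Steps 6-7: server relays blob; new device decrypts and adopts the old key.
adopted = int.from_bytes(decrypt(new_priv, *blob), "big")
assert adopted == old_priv  # both devices now hold the same key pair
```

Which also makes the objection concrete: whoever supplies the "new device" public key, including the server itself, receives the old device's private key.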
Please let me know if I got that wrong!
Assuming I got it right, this strikes me as problematic, because it means that all the devices are sharing a private key. If any one device were compromised, then all past messages would be exposed; if the compromise were to go undetected, then future messages would also be compromised.
“Linking multiple devices” via their public keys is still fine, since the user has to log in once with email/password (or by some other means).
If you were to require multiple private keys to decrypt a message, that would be a different thing - maybe to ensure that only the current members of a group chat can decrypt its messages, with a new group secret generated from all members' private keys whenever a member leaves or joins. But then what would the benefit of this be?
I personally think there is potentially a big security/usability win in using a securely (key part here) synced private key (via iCloud Keychain, for example).
That said, I like the Apple approach better here.
No forward secrecy?
Passing private keys all over the place?
Look at how Apple does it. Private keys stay on the device in the Secure Enclave; even the user cannot export them from there. I don't think even system root (which users and software generally don't have access to) can export private keys.
Adding a new device to the account involves the old device generating a funny-looking wave pattern which the new device scans. This has always worked fine for me. I assume Apple then adds the new device's public key to my account.
Your server can then impersonate a "new device": generate an ephemeral key pair, send the public key to all the user's authenticated devices, and perhaps trick the user into encrypting their private key to it. One way to prevent this is to show some sort of fingerprint on both the new device and the authenticated device, proving that the public key is the correct one.
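A fingerprint here is just a short digest of the public key that both screens render the same way, so the user can visually compare them. A minimal sketch (the grouping and length are my own formatting choices):

```python
import hashlib

def fingerprint(pub_key: bytes) -> str:
    """Hash the public key and show short hex groups a human can compare."""
    digest = hashlib.sha256(pub_key).hexdigest()[:16]
    return " ".join(digest[i:i + 4] for i in range(0, 16, 4))

# Both devices display this; the user approves only if the strings match.
example = fingerprint(b"\x04" + b"\xab" * 64)  # e.g. "3f9c 01ba ..." style output
assert fingerprint(b"\x04" + b"\xab" * 64) == example
```

The server can relay the key, but it can't make two different keys render the same fingerprint, which is exactly the property the impersonation attack relies on breaking.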
2) Users, even ones who know what fingerprints are and care, do not validate fingerprints. The rest of the users (the other 99.9%) don't either. This is not a good UX.
You could make validation part of the workflow (before you can even "say yes", you need to scan a QR code for example). You might say this is also not good UX and I wouldn't disagree. Cryptography + good UX is a hard problem.
I don't really see how it's very different from an external security token that does basically the same thing. (Though an external security device can make it very difficult for something malicious to get at the private key, which is good.)
Secondly, if I can successfully get you to approve my login request then I not only get logged in but also receive a copy of your private key? That is a really bad failure mode.
(Though I'm not convinced the proposal in the article is all that secure - as others have pointed out, it seems trivial for an evil server to obtain the private key.)