For OTP, calling it out as "non-universal" seems strange: far more services support TOTP/HOTP than U2F at the moment, so for most users, even if they use U2F everywhere it's available, they'll still end up with a pile of TOTP codes on their phone.
The proposed solution is even stranger: it still recommends storing the 2nd token at home, but only offers to solve registration of new services while abroad. No solution is provided for improving recovery beyond just using two U2F tokens.
> and possibly the invalidation of the lost key at first login
Do you mean that some service might disregard the counter value (the fact that Google and Github respect it doesn't mean everyone does the same), or something else?
> it still recommends storing the 2nd token at home
In the article I mention that I can either bury it somewhere in the forest or brick it up in a wall. Even if we ignore the forest part, do you consider "bricking it up in a wall" and just keeping it in the bedroom to be the same thing, even if it's my own home's wall? If you do, then ok, sorry for wasting your time, we're not going to agree. Because I believe that making the token really hard to access (like having to disassemble the actual wall a little bit) does make the token a lot more secure.
Another important point is that with a regular second token I'd have to add it to every new service manually, which can be non-trivial if I'm traveling far from home. And I could simply forget, etc etc.
Personally, I have a couple U2F tokens and OTP on my phone, with recovery codes. If I register for a new site when I'm on the road, it gets my travel U2F and phone OTP. When I get home, I pair up the 2nd U2F and drop the recovery codes in the box with the rest of them.
As to having to take a sledgehammer: well, losing a token is an emergency, and I just try hard not to lose it. But if I do, I still have a way out, and breaking a wall a little is still a lot better than losing access to my accounts.
The cons listed under "Separate U2F token for backup" don't make any sense (except the last point; I'm looking at you, Twitter). They assume that you have your primary U2F token on your keychain and a backup token at home or in a safe. That's nonsense. You should have one nano U2F token in every laptop's USB port, then another one on your keychain (and probably another one at home or somewhere). And there's no primary/backup: every U2F token should be treated the same.
The current state of USB ports makes it really important to be able to add multiple keys to an account. From my experience you need 1) USB-A keys that are used on most devices, 2) USB-C keys that can be used on devices like an MBP, 3) NFC keys to work from your Android device.
U2F does use public key crypto (ECDSA).
> because one can carry all the public keys and only a single private key.
Please clarify. Typical pub key crypto usage is that there is a 1:1 mapping private key to public key. Before I go on to smash your argument (j/k) you need to explain this part a bit better.
OP's problem is that he needs his crypto keys at all times, because without them he cannot sign up for a new service AND enroll them all. Now he's forced to carry his backup everywhere, making it just as prone to loss as his primary. Valid point.
Alternatively, if OP could carry a key ring of public keys for all his crypto keys, his backup crypto key can stay home, or in a safe deposit, or buried in a virgin forest of maple trees. While U2F may use ECDSA, or whatever other public key algorithm, it is not possible to carry just the public key and enroll it, is it?
The following is not 2FA, obviously: To illustrate the public key point more clearly, when I sign up for Github, I don't need all my private SSH keys on hand. I only need all my public keys to enroll them, and a single private key to use SSH. If at any time one of the crypto keys is stolen, I can remove the associated public key and use one of the backup crypto keys instead.
> While U2F may use ECDSA, or whatever other public key algorithm, it is not possible to carry just the public key and enroll it, is it?
Right, it is not. You have to prove possession of the corresponding private key at enrollment time. A good enhancement to U2F would be to allow cross-attestation. When I enroll, I also sign and send the pubkey (and handle) of other keys I want enrolled at the same time. This requires that enrollment keys not be generated randomly (I have to know which pubkey to cross-sign), but this requirement can be dealt with (for real-world practical uses) on the token side.
I don't know how effectively this addresses the problem at hand. How is it better than a like-keyed backup token whose use automatically invalidates the primary token?
So, what security problems come to your mind if we consider a duplicate token buried at 1 meter somewhere in the forest (where nobody really knows it might be)?
With PKI, I contact my spouse. Honey, can you revoke the key I'm carrying? It's been stolen. Thanks sweetie! See you tonight.
I'll admit the duplicate approach makes for a much better movie. The PKI solution is positively boring. Maybe we could throw in a spouse kidnapping to keep it interesting?
Your article makes a valid point about the catch 22 of U2F keys. I simply disagree with your conclusions that it is the user who should try harder to make U2F work. It seems like you are pointing out that U2F is fundamentally broken, but you haven't accepted that yet.
The author says:
"This way, the counter range of the primary token is [0, 2097151], while the counter range of the backup is [2000000000, 2002097151]. The fact that those ranges don't intersect ensures that once backup token is used on some service, the primary one is invalidated for good."
But... If my understanding is correct, the counter value is stored both on the server and the token. The server rejects authentication IF the value of counter provided is less than the value of counter stored server-side.
This means that every server (service) stores a different value of the counter.
Doesn't this mean that using the backup token only invalidates the main token for the single service on which the backup token is being used?
The main (compromised) token will stay perfectly usable on all the other services, except those where we manually disable it.
Am I correct in this?
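To make the per-service nature of the counter check concrete, here's a minimal sketch of the server-side logic described in the thread (class and variable names are mine, not from any real implementation):

```python
# Minimal sketch of the per-service counter check. Each service stores its
# own last-seen counter for a key handle, so a high backup counter only
# locks out the primary token on the one service where the backup was used.
class Service:
    def __init__(self):
        self.last_counter = -1  # last counter value seen for this key handle

    def authenticate(self, counter):
        """Accept only strictly increasing counter values."""
        if counter <= self.last_counter:
            return False  # possible cloned token
        self.last_counter = counter
        return True

github = Service()
gmail = Service()

PRIMARY = iter(range(0, 2_097_152))                  # primary token's range
BACKUP = iter(range(2_000_000_000, 2_002_097_152))   # backup's boosted range

assert github.authenticate(next(PRIMARY))      # primary works on GitHub
assert github.authenticate(next(BACKUP))       # backup used on GitHub...
assert not github.authenticate(next(PRIMARY))  # ...primary now rejected there
assert gmail.authenticate(next(PRIMARY))       # but still accepted elsewhere
```

So yes: the counter trick invalidates the primary token per service, only as the backup is actually used there.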
Maybe service providers could provide an https endpoint to perform this?
It might be a good idea to give a distinct well-known name to this kind of endpoint, and to provide such endpoints in the registration email, so that when a token is to be invalidated, a user can regexp-search for the well-known endpoint name in their mailbox, retrieve all such endpoints, and invalidate the compromised token against all of them.
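As a rough sketch of what that mailbox search could look like (the well-known path `.well-known/u2f-revoke` is invented here purely for illustration; no such standard exists):

```python
# Hypothetical sketch: if registration emails contained a well-known
# revocation URL, a user could grep their mailbox dump and collect every
# endpoint to invalidate a compromised token against.
import re

WELL_KNOWN = re.compile(r"https://[^\s\"']+/\.well-known/u2f-revoke")

def find_revocation_endpoints(mailbox_text):
    """Return the unique revocation endpoints found in a mailbox dump."""
    return sorted(set(WELL_KNOWN.findall(mailbox_text)))

mail = """
Welcome to ExampleCorp! To revoke a lost token, visit
https://example.com/.well-known/u2f-revoke at any time.
Thanks for signing up at OtherSite: https://other.example/.well-known/u2f-revoke
"""
print(find_revocation_endpoints(mail))
# ['https://example.com/.well-known/u2f-revoke',
#  'https://other.example/.well-known/u2f-revoke']
```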
That's a good idea, with one caveat. It needs to be quite hard to revoke a token. Proof of 1-factor is not sufficient; an attacker could then simply revoke your token and then gain access.
So a lost token is quite a challenge, even if there is a revoke URL. This doesn't seem automatable (to revoke everything "in a single time", as you say).
> provide such endpoints in the registration email
In addition, the browser should record U2F interactions and know all sites on which a token was registered. Then such a list is very easily accessible, as long as the browser log is available. If it isn't, well you have email as a backup log.
The fact that you can't unilaterally revoke all token registrations is the main weakness of U2F. (besides the cost and the user action required for uptake of it)
This post, in very short, proposes to flash two devices with the same key. I think this is great for hackers & makers, but not for consumers.
Moreover, it might be convenient on a superficial level, but I think it's fundamentally useless. If we consider the security model, the backup device can only be used if the primary device is damaged. If the primary is lost or stolen, the key should be invalidated, and thus the backup device is useless.
This means that you always need 2 distinct, active security keys anyways.
For consumers, I believe that the distinction between broken vs lost/stolen is very hard to explain. Think also of a company where you deploy security devices: an employee suddenly needs a backup device, and helpdesk has to investigate whether the original was broken or lost. It's simply a mess.
For a possible solution, I think there should be a protocol where a user can export the key, encrypted for a backup device.
At a high level: 1) insert the backup device and start the backup process; this exports a public key, proof of possession of the corresponding private key, and an attestation certificate; 2) insert the primary device and export the key: upon validating attestation and proof of possession, it encrypts the key for the backup device and exports the blob; 3) re-insert the backup device to receive the key, decrypt, store and use it. From this moment on, the device should clearly look like a backup; for example the LED could be red instead of blue (see, for example, how MetaMask displays imported keys).
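The three-step flow above can be walked through with a toy sketch. To stay self-contained, the "encryption" below is a hash-derived XOR keystream standing in for the secure element's real ECDH/attestation machinery; every name is illustrative, and this only shows the message flow, not a secure construction:

```python
# Toy walkthrough of the three-step backup flow. NOT a secure scheme:
# the transport key is derived from public data purely to keep the demo
# runnable; a real device would do ECDH after verifying attestation.
import hashlib
import hmac
import os

def xor_stream(key, data):
    """Toy symmetric cipher: XOR with a hash-derived keystream."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Step 1: backup device exports a public value plus proof of possession.
backup_secret = os.urandom(32)
backup_public = hashlib.sha256(b"pub" + backup_secret).digest()  # stand-in

# Step 2: primary device wraps its master key for the backup device.
master_key = os.urandom(32)
transport_key = hmac.new(backup_public, b"transport", hashlib.sha256).digest()
blob = xor_stream(transport_key, master_key)  # exported encrypted blob

# Step 3: backup device derives the same transport key and unwraps.
restored = xor_stream(transport_key, blob)
assert restored == master_key  # backup now holds the same key material
```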
I don't think the protocol is really important here, I think what's really important is that: 1) the user should be the one creating the backup device, if he's knowledgeable enough and understands the risks and uses, and 2) most importantly, there shouldn't be a way to flash a security device with untrusted code.
I'd be curious to hear what people think about this, would backup be useful to you? We're working on the new gen of u2fzero (https://solokeys.com), and this could be a nice feature to add if people feel it's needed.
I mean, yes, the 2 devices can be initialized at different times, but otherwise it doesn't matter.
Only during the moment of provisioning.
> In particular, if someone prebuilds the keys for you, he/she has your key.
You have a similar weakness. If someone prebuilds the keys, you don't know if they are loading a fixed key. You also have the problem of not knowing if the attestation cert is actually trustworthy, or where else the key for that cert might live.
> If the primary is lost or stolen, the key should be invalidated
> and thus the backup device is useless.
The article mentions multiple times that the backup is set up in such a way so that right after we use the backup token on some service, the primary token becomes immediately invalidated for this service. Read the article for the details on how it's implemented.
In the article I explain how to make it impossible to just "increment the counter" of the primary token, see this section: https://dmitryfrank.com/articles/backup_u2f_token#caveat_wit...
> In crypto, keys are secrets, the rest can’t be relied on for security.
First, there's no 100% security in the world: it's all about time and resources, and proper security consists of many layers. The counter is one of them; after all, it exists in the U2F protocol for a reason (to prevent clones).
And if you've read the article, you'd notice that it mentions that the backup token should only be used to log into accounts, add a new key, and invalidate the old one. Of course it would be a bad idea to keep using the backup token. To avoid waiting for a new pair of tokens to arrive, I keep them together with my backup: my backup consists of the backup token itself, plus a brand new pair of tokens (new primary and new backup), which aren't used anywhere yet. So if something terrible happens and I lose my primary token, I go all the way down to get the backup, use the backup token to log into each service, add the new primary token, and revoke the old one.
I see your point on the limited usage of the backup while you buy a new device. This goes back to my initial comment... you basically always need 2 active security devices, so I don't really see any big benefit in having a backup vs a secondary device.
Again, not saying it's impossible, but with the existing implementation it takes a considerable amount of time. I should be able to get the backup token faster.
> I don't really see any big benefit in having a backup vs a secondary device.
A big benefit for me is not having to add my backup token to every single service. It's both more convenient and more reliable (since I can't forget), and also more secure, because I can take my backup token and brick it into the wall. If this benefit is not a benefit for you, then, fine, we're not going to agree.
I think the time factor is a big advantage. As I mentioned, I'm working with Conor on the FIDO2 security key (actually, update ), and we were thinking of an option to create backups, maybe for advanced users. I'll keep you posted if we end up doing something in this space.
I understand the counter piece, but let's say someone steals your primary U2F, can't they just increment the counter to 1000002 and it would keep working even if you have used your backup token?
If they steal a primary yubikey token, no. The counter is stored and managed only in the secure element part of the device. If they steal a primary u2fzero token, which of course the proposal depends on, the counter is not protected in any meaningful way.
This doesn't matter in practice, however. No site is doing anything useful with the counter.
What do you mean? In the article I mentioned that at least Google and Github refuse to authenticate if the counter is less than the last seen value. So using backup token does invalidate the primary one.
> If they steal a primary yubikey token, no. The counter is stored and managed only in the secure element part of the device. If they steal a primary u2fzero token, which of course the proposal depends on, the counter is not protected in any meaningful way.
Where did you get the idea that the counter on u2f-zero is not protected in any meaningful way? The counter is maintained by the ATECC508A chip and is incremented on each authentication. And also see my adjacent comment about reliably preventing primary token from returning a counter which is as large as that of backup token.
In the u2fzero implementation, the counter is not used internally by the ATECC508A in the signature generation. It's merely used as stable storage.
It's used much like unix advisory file locking. As long as you are not using it adversarially, it will work "correctly".
Once you attack the device, it's absolutely trivial to use any counter value you care to, not at all connected to the (yes, secure-enough) counter internally stored in the ATECC508A.
Apologies about my incorrect statement about any site's usage of the counter. I was mistakenly thinking about the allowance of the counter to increase by any increment.
Still, this is a weakness of the U2F spec. In fact, there is no spec for counter usage on the RP (relying party) side, just an implementation consideration:
> Relying parties should implement their own remediation strategies if they suspect token cloning due to non-increasing counter values.
So you, the conscientious user, would need to verify with each site that they don't allow the counter to reset. Well, you would need to if the counter were implemented correctly with u2fzero.
Could you elaborate more on that? How exactly could I use any counter value?
Ok cool, I’ll update the article with that important detail.
But the primary token uses just `hardware_counter_value`, so its range is from `0` to `2097151`.
The important part is that the ranges of primary and backup tokens don't intersect.
Also, let's make it not 1 000 000 but, say, 4 000 000 000, which still leaves plenty of room within a 32-bit value.
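The offsetting scheme quoted from the article can be sketched in a few lines (the 21-bit hardware counter size is taken from the thread; the helper name is mine):

```python
# Sketch of the counter offsetting described in the thread: the hardware
# counter is 21 bits, and the backup token adds a large fixed offset so
# the two ranges can never intersect within a 32-bit value.
BACKUP_OFFSET = 2_000_000_000
HW_COUNTER_MAX = 2**21 - 1  # 2097151, the 21-bit hardware counter maximum

def reported_counter(hw_counter, is_backup):
    """Counter value the token reports to the service."""
    assert 0 <= hw_counter <= HW_COUNTER_MAX
    return hw_counter + (BACKUP_OFFSET if is_backup else 0)

# Primary range [0, 2097151] and backup range [2000000000, 2002097151]
# never intersect, and even a 4 000 000 000 offset would still fit:
assert reported_counter(HW_COUNTER_MAX, False) < reported_counter(0, True)
assert 4_000_000_000 + HW_COUNTER_MAX < 2**32
```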
Also, having 4 tokens does feel like too much to me. Each additional device opens a new attack vector, and if your token at home or elsewhere suddenly disappears, you're unlikely to notice quickly, while an attacker could use that window with the token in hand.
Also, if the service doesn't support having multiple tokens (as mentioned in comments, Twitter is such an example), then having 4 tokens doesn't help much.
he fails to recognize that the yubikey has a secure element and is actually secure, whereas the u2fzero is garbage. (secure vs the host OS but otherwise trivially hacked).
he is worried about losing his phone — which is relatively secure — to the degree that he’d have to invalidate all the keys, yet for the less secure u2fzero has no such concern.
a better solution would be a cloud backed token. (still with caveats of course)
The source code is horrible so I'm not going to do a full analysis but that's the gist.
A recent generation phone is far, far more secure.
Where did you get that idea from? Preparing a device consists of the following steps:
- Flash temporary configuration firmware
- Send keys to the device, which are stored on ATECC508A
- Flash actual u2f-zero firmware
So on the second step (configuring the device), the host writes keys on the device, and those keys are stored on ATECC508A. It happens there: https://github.com/conorpp/u2f-zero/blob/master/firmware/src...
I do agree that the code is bad though, but alas. I did consider reimplementing it properly, also on a more powerful chip, but I don't think I'll be able to find time for that.
Anyway, dirty code doesn't mean that the device is insecure. From what I see, among other things, ATECC508A is used properly.
You have completely misread/misanalyzed it. It's used only as a "hardware library" to avoid implementing ECDSA. Or perhaps you don't understand U2F well enough?
I can't understand why you are downvoted. It's a valid (albeit wrong) contributing comment.
Most critically for u2fzero, the MCU sees the enrollment key. On enrollment, a new key is derived, and the nonce used in derivation is sent to the site as the key handle. On use, the key is re-derived from the returned nonce, given to the MCU, then loaded into the ATECC508A, then the "secure counter" is read and a signature generated. I haven't looked in detail at RMASK vs WMASK but it smells like they are not useful.
In the yubikey, the entirety of this code is implemented inside the secure element. In the u2fzero, except for the actual signature itself, the important parts of this code are executed outside of the secure area. The security of the key derivation is good because it prevents one-time access to the device from compromising future enrollments, but for the use case you propose (lost token) it isn't secure since the strongly-generated already-enrolled key is revealed.
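The derive-from-nonce scheme described above can be sketched roughly as follows. This is an HMAC-based stand-in, not u2f-zero's exact construction (it omits the RMASK/WMASK layer entirely), and all names are illustrative:

```python
# Rough sketch of nonce-based key derivation: a per-site key is re-derived
# from a device master secret plus a random nonce that is handed to the
# site as the key handle. The weakness noted in the thread is that the
# derived key exists in MCU memory during this flow.
import hashlib
import hmac
import os

MASTER_SECRET = os.urandom(32)  # lives on the device

def register(app_id):
    """Enrollment: derive a fresh key, return the handle the site stores."""
    nonce = os.urandom(16)
    priv = hmac.new(MASTER_SECRET, app_id + nonce, hashlib.sha256).digest()
    return nonce, priv  # the nonce itself serves as the key handle

def authenticate(app_id, handle):
    """Authentication: re-derive the same key from the returned handle."""
    return hmac.new(MASTER_SECRET, app_id + handle, hashlib.sha256).digest()

handle, priv = register(b"https://example.com")
assert authenticate(b"https://example.com", handle) == priv   # same key back
assert authenticate(b"https://evil.example", handle) != priv  # appid-bound
```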
A simplistic correction would be to limit the privacy preserving aspect of U2F to 16 sites (the key slot limit on the ATECC508A). This is a fine compromise since there are only 11 sites that accept U2F, per https://www.dongleauth.info/dongles/ . On the 17th reg, u2fzero can refuse to register, or it can cycle back to the 1st key, or it can keep reusing the 16th key, or it can regenerate (and thus unenroll) a key, or other choice.
Rather than a wrapped key, or nonce, or other data for key regeneration, the handle returned to the site is just a key slot index. In this way, the key is generated by and never leaves the ATECC508A.
So, if you're following along, you now realize that this defeats your backup methodology, since the keys would not be wrapped and passed back and forth between site and token. You could pre-generate all the 16 keys outside of the device and program 2 of them identically.
Or you could use the ATECC508A internal DeriveKey command. This can use just the appid as nonce and doesn't need an actual random nonce. This doesn't limit the number of sites that can be registered and also allows duplicates (by programming the same master key into all equivalent devices, along with a starting counter value). The handle in this case can just be a static token-id. If you wanted multiple users to actually register different keys using the same token, then you should add a random nonce. This prevents a site from identifying 2 users sharing a token, and learning something about their relationship. Or a site from identifying 1 user with multiple accounts. But the nonce isn't a security aspect per se, just privacy.
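The duplicate-token property this gives can be illustrated with a toy stand-in for DeriveKey (plain HMAC here, which is not the ATECC508A's actual KDF; the point is only that a shared master key plus an appid-only nonce yields identical per-site keys on both devices):

```python
# Toy illustration of the DeriveKey-style idea: two devices programmed
# with the same master key derive identical per-site keys when the
# derivation nonce is just the appid, so either device can answer.
import hashlib
import hmac

SHARED_MASTER = bytes(range(32))  # programmed into both devices at setup

def derive_site_key(master, app_id):
    # Stand-in for the secure element's key derivation, appid as nonce.
    return hmac.new(master, app_id, hashlib.sha256).digest()

primary = derive_site_key(SHARED_MASTER, b"https://example.com")
backup = derive_site_key(SHARED_MASTER, b"https://example.com")
assert primary == backup  # both tokens produce the same per-site key
```

Adding a random per-registration nonce on top of this, as noted above, buys privacy (unlinkable registrations) rather than security.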
Regardless, your desired use case cannot be met "securely" with the ATECC508A, because the counter is not secure. Anyone with access to the lost primary token can just set whatever counter they like. This is mitigated by the token being a 2nd factor. I would further mitigate it by burning the JTAG fuse and potting the ATECC508A. Then the MCU can't be reflashed or read out, and the ATECC508A can't simply be popped off trivially and put on a new custom board. If you mix partial static data from the MCU, along with a random nonce, as part of key derivation, then the MCU and ATECC508A pair are married and you have to de-pot both of them; now the attack is hard enough (beyond amateur hour) that you should have plenty of time to enroll a new token.
Let me just also laugh at the u2fzero site: "It is implemented securely."
Second, it looks like you're assuming that it's easy to just take the MCU and read out the code programmed into it. From my past embedded experience I know that e.g. MCUs by Microchip have read-protection bits in their config, so that if a chip is programmed with those bits set, one can't just read out the hex. Not saying it's totally impossible, but it takes a considerable amount of time. This is the relevant part of the datasheet for e.g. PIC32:
> 27.2 Device Code Protection
> The PIC32MX Family features a single device code protection bit, CP that when programmed = 0, protects boot Flash and program Flash from being read or modified by an external programming device. When code protection is enabled, only the Device ID registers is available to be read by an external programmer. Boot Flash and program Flash memory are not protected from self-programming during program execution when code protection is enabled.
HOWEVER, that said, after briefly looking at the u2f-zero MCU's datasheet (https://www.silabs.com/documents/public/data-sheets/efm8ub1-...), I failed to find any mention of read protection, which does seem strange to me. I need to figure out more before I can say for sure. But nevertheless: suppose I get some time and reimplement something like u2f-zero on an MCU which does have code protection, e.g. PIC32. Does that address your concerns?
First you argue that the device is secure. When I point out that it is not, you switch tack to argue that when you take other factors into account, it's still "better" than doing it some other way because of some other factor, so overall security is good.
You also want to conveniently forget aspects of your original argument at your blog, such as the defect with OTP that a phone can be stolen. A phone is much, much, much more secure than u2fzero.
You're also insisting that your particular use case for needing a backup is the only and best use case. If you want to design a system for your individual desire and needs, that's great, but this doesn't generalize. I'm guessing most people will not want their device baseline to be insecure so that they can have a 2nd insecure backup at hand.
As I hinted, the best solution is likely a cloud based token. This can be simulated with u2fzero by using a method similar to what @ecesena proposed. With either knowledge of an authorized external transport key, or by sharing a transport key, the atecc508a can share key material securely. I won't work out the details here, but basically you just need a single initialization-time secret that you store in the cloud. With that, you can take an arbitrary fresh unprogrammed u2fzero and initialize it to look like the one you lost. It has to be done this way (at the user's direct control) because if you buy u2fzeros pre-programmed, all security is lost.
The secret blob can be stored on your [secure] phone and backed up in a safe on a piece of paper, or any other storage scheme which a particular user may require to meet their needs. Personally, I would store it in icloud.
or if you used publickey, closer to @ecesena's proposal, you don't need to keep a secret. (at the expense of having to prepare the backup devices ahead of time.)
In fact, I really appreciate you explaining all these details, and to be honest I need to find a decent amount of time to fully wrap my head around them; hopefully I'll be able to on the weekend. I doubt it will change my reasoning about my personal use case, but learning something new about security is never bad. Thanks for that! Hopefully I'll get back to you after learning more.
Also, I wanted to make 100% sure: are you arguing just about u2f-zero being insecurely implemented, or is the whole concept of a backup with a cloned, securely-stored token also bad? If we imagine that e.g. Yubico starts producing matched pairs of tokens (I doubt it would, but let's imagine), a primary and a backup with the boosted counter, implemented securely enough etc.: would that still be bad in your opinion?
> are you arguing just about the u2f-zero being insecurely implemented,
yes. the concept of a pre-cloned token is a good one. my main point of contention with you is that in your threat model, a lost phone is a security problem, yet a lost u2f is not. you need to revise your threat model or revise your solution.
I realized that I could have explained my concerns better in the article. My biggest issue with the phone is not that it could get hacked, but that I could simply lose it together with my U2F token (because, you know, I always carry both the phone and the U2F token), and thus get locked out of my accounts. So it's not about somebody attacking me specifically to get my 2FA data, but just about some bad luck resulting in losing both the U2F token and the phone which was its backup.
Instead of Google Authenticator we could use Authy, which synchronizes its database with the server, but an Authy account can be recovered with SMS, which is anything but secure. I actually updated the article just now with that point.
I really want to have a backup which is rock-solid secure and reliable, you know, more reliable than any other 2nd factor I have. So having a token bricked into the wall or something like that would work.
> you need to revise your threat model or revise your solution
When I have a chance to reimplement the same backup concept on something more secure than u2f-zero, then yeah, I surely will revise the solution.
in short, MCUs are not secure devices. the additional protection is great for surface level protection, and certainly sufficient for most threat models. but in this case, we are talking about an authentication token that is obligated to keep secrets. one cannot claim "secure implementation" without secure key storage.
even the atecc508a doesn't really meet the definition of secure. eg, there is no eal certification. no reputable vendor would use it in a commercial security device. it's great for its target market though.
unfortunately, there is no way for a hobbyist to acquire actually secure chips. something like the atecc508a will have to do.
I doubt there is a 100% secure device, you know; it's all about tradeoffs. Given enough time and resources, nearly anything can be hacked.
By the way, if an attacker has gained physical access to the primary token (and also to the other factor, the password, since a token on its own is not too helpful), they don't need to hack it in any way: they can just use it to log into the account, add some other token, and revoke the existing one.
you might want to read up on EAL certification. Yes, there is no 100% secure device, but "secure" is about resistance to attack and an actual secure element is very resistant, as well as tamper evident.
u2f-zero could be hacked in 10s flat if you prep ahead of time.
> if an attacker has gained physical access to the primary token ... they don't need to hack it in any way
but they need long term access. with this device, short term access is enough to learn the key and then i can get access at a time of my choosing. i can also learn the counter value and you won't notice that i have gained access.
I'm not sure of any methods to bypass the read protection on normal MCUs in a 10s "drive by" attack. AFAIK, the special companies that provide flash readout (http://www.break-ic.com/), do so by decapping the chip and using involved imaging techniques. I suspect they get good at identifying various flash technologies, many of which are common to many chips. But don't think it's feasible for a drive by.
The I2C eavesdropping shouldn't be an issue because the ATECC508A does apply a mask.
If I lose my primary token, how could I not notice? I look at it a few times per day, and I use it fairly often too, so I can't see how I would fail to notice losing it (even if it were replaced by a similar-looking device).
RMASK and WMASK, which smelled not useful to you, are there exactly to prevent this from happening.
> the key managed here
just means that the "actual key" (unmasked) only lives in MCU memory for a short time: from when the mask is applied until the return to the caller, when memory is cleared, in the enrollment case I linked. In the authenticate case, it lives quite a bit longer, because the stack space used for key storage isn't zeroed.
The ATECC508A doesn't use or know how to use the mask. The actual key used for the encryption is passed in the clear over I2C.
(note that the key derivation you suggest is wrong because of the extra xor masking.)