Reliable, Secure and Universal Backup for U2F Token (dmitryfrank.com)
46 points by dimonomid 4 months ago | 73 comments



The cons about the second-U2F-device option seem rather odd. Several boil down to "houses aren't secure", but the primary attack vector for the vast majority of users is not physical invasion or destruction of their home. Similar concerns are raised for recovery codes.

For OTP, calling it out as "non-universal" seems strange: far more services support TOTP/HOTP than U2F at the moment, so for most users, even if they use U2F everywhere it's available, they'll still end up with a pile of TOTP codes on their phone.

The proposed solution is even stranger: it still recommends storing the 2nd token at home, and only solves registration of new services while abroad. No solution is offered for improving recovery beyond simply using two U2F tokens.


Yes, the vast majority of users just worry about losing or breaking their U2F token. Such a user just wants to be able to log in and replace credentials. The proposed solution, which I find surprisingly elegant, offers ease of use compared to a regular pair of tokens: you avoid registering a second token everywhere (and possibly get invalidation of the lost key at first login). Compared to the usual TOTP fallback, it keeps the phishing protection.


Thanks. What do you mean by "possibly" though?

> and possibly the invalidation of the lost key at first login

Do you mean that some service might disregard the counter value (the fact that Google and Github respect it doesn't mean everyone does the same), or something else?


Yes :) Personally, I would just start replacing credentials upon loss in descending order of importance.


Yeah sure, I mentioned in the article that the purpose of the backup is to enroll a new token and revoke the old one. It would be a bad idea to keep using the backup for a long time anyway.


I don't know what makes you miss the point, but ok, let me repeat. A regular second U2F device has to be easily accessible (e.g. at home in my bedroom). With the proposed solution, it doesn't have to be easily accessible.

> it still recommends storing the 2nd token at home

In the article I mention that I can either bury it somewhere in the forest or brick it up in a wall. Even if we ignore the forest part, do you consider "bricking it up in a wall" and just keeping it in the bedroom the same thing, even if it's my home's wall? If you do, then ok, sorry for wasting your time, we're not going to agree. Because I believe that making the token really hard to access (like having to disassemble part of an actual wall) makes it a lot more secure.

And another important point: with a regular second token I have to add it to every new service manually, which can be nontrivial if I'm traveling far from home. And I can simply forget, etc.


If someone is in my home looking for my U2F key, they've bridged the gap between the threat model I'm concerned with and the model where they're clearly motivated enough that putting the key in brick isn't going to stop them. Furthermore, I don't really like the prospect of having to take a sledgehammer to my wall when I drop my U2F key during a trip.

Personally, I have a couple U2F tokens and OTP on my phone, with recovery codes. If I register for a new site when I'm on the road, it gets my travel U2F and phone OTP. When I get home, I pair up the 2nd U2F and drop the recovery codes in the box with the rest of them.


First, if the token is easily accessible, then they can steal it even if they aren't actually looking for it. Like, steal it "accidentally". Second, obscurity here is an important part of the security: you don't have to tell anyone that you have a token in the wall. So even if they came specifically for the token, they can certainly search your home for it (because most people keep the backup token just at home), but it's a lot less likely they'll start breaking walls (unless they specifically know it's there). So here, the level of security is up to the token owner.

As to having to take a sledgehammer: well, losing a token is an emergency, and I try hard not to lose it. But if I do, I still have a way out, and breaking a wall a little is still a lot better than losing access to my accounts.


Wat.

The cons listed under "Separate U2F token for backup" don't make any sense (except the last point; I'm looking at you, Twitter). They assume that you have your primary U2F token on your keychain and the backup token at home or in a safe. That's nonsense. You should have one nano U2F token in every laptop's USB port, then another one on your keychain (and probably another one at home or somewhere). And there's no primary/backup: every U2F token should be treated the same.


Should? That means purchasing a bunch of tokens, each of which could be lost, and registering them all. I don't think online security has normative/prescriptive rules, just tradeoffs :)


Notably, Twitter doesn't support adding multiple U2F keys.

The current state of USB ports makes it really important to be able to add multiple keys to an account. From my experience you need 1) a USB-A key for most devices, 2) a USB-C key for devices like a MacBook Pro, 3) an NFC key to work from your Android device.


Oh, thanks for the note about Twitter! I'll update the article.


I don't believe the Yubikey is the problem as much as the U2F spec. Public key crypto solves this nicely, because one can carry all the public keys and only a single private key. When I think of backup keys, I think multiple keys, not duplicate keys. Having a duplicate of the U2F key is a bigger problem for security than those outlined here. Being unable to duplicate the Yubikey is a feature, not a bug.


> Public key crypto solves this nicely

U2F does use public key crypto (ECDSA).

> because one can carry all the public keys and only a single private key.

Please clarify. Typical pub key crypto usage is that there is a 1:1 mapping private key to public key. Before I go on to smash your argument (j/k) you need to explain this part a bit better.


>Please clarify.

OP's problem is that he needs his crypto keys at all times, because without them, he cannot sign up for a new service AND enroll them all. So he's forced to carry his backup everywhere, making it just as prone to loss as his primary. Valid point.

Alternatively, if OP could carry a key ring of public keys for all his crypto keys, his backup crypto key can stay home, or in a safe deposit, or buried in a virgin forest of maple trees. While U2F may use ECDSA, or whatever other public key algorithm, it is not possible to carry just the public key and enroll it, is it?

The following is not 2FA, obviously: To illustrate the public key point more clearly, when I sign up for Github, I don't need all my private SSH keys on hand. I only need all my public keys to enroll them, and a single private key to use SSH. If at any time one of the crypto keys is stolen, I can remove the associated public key and use one of the backup crypto keys instead.


> Alternatively, if OP could carry a key ring of public keys for all his crypto keys, his backup crypto key can stay home, or in a safe deposit, or buried in a virgin forest of maple trees

Ah.

> While U2F may use ECDSA, or whatever other public key algorithm, it is not possible to carry just the public key and enroll it, is it?

Right, it is not. You have to prove possession of the corresponding private key at enrollment time. A good enhancement to U2F would be to allow cross-attestation. When I enroll, I also sign and send the pubkey (and handle) of other keys I want enrolled at the same time. This requires that enrollment keys not be generated randomly (I have to know which pubkey to cross-sign), but this requirement can be dealt with (for real-world practical uses) on the token side.

I don't know how effectively this addresses the problem at hand. How is it better than a like-keyed backup token whose use automatically invalidates the primary token?


> Having a duplicate of the U2F key is a bigger problem for security than those outlined here.

So, what security problems come to your mind if we consider a duplicate token buried 1 meter deep somewhere in the forest (where nobody knows it is)?


My key is stolen. I have to revoke it, but I need my duplicate to log in to do it. The attacker stole my key and phone, because they were in the same place. I've only got a few minutes to go out to the remote forest and dig a meter down to find my key before the attacker uses it with the phone back at bad guy headquarters to launch the nukes. Did I bury it next to this tree or that one? Damn it, they all look the same now. Dig fast, but careful not to smash the key with the shovel. Gosh, the water table is higher than when I buried it. I hope it still works. Christ, it's all tangled in roots! Pull!! Got it! Now log in, revoke it, reflash the key, re-enroll. Phew! Humanity is saved.

With PKI, I contact my spouse. Honey, can you revoke the key I'm carrying? It's been stolen. Thanks sweetie! See you tonight.

I'll admit the duplicate approach makes for a much better movie. The PKI solution is positively boring. Maybe we could throw in a spouse kidnapping to keep it interesting?


Ah ok, I see what you mean. So can we use the PKI solution today to use as a second factor for, say, Google?


Firefox-based browsers support PKCS#11 for smart cards like the Yubikey. The right way can be built today; popular services just never built it. Maybe when security keys are more popular, they will. I believe trying to make the wrong way work at any cost will only entrench the wrong way. I'd rather point out that there is a better way in hopes that others will adopt it.

Your article makes a valid point about the catch-22 of U2F keys. I simply disagree with your conclusion that it is the user who should try harder to make U2F work. It seems like you are pointing out that U2F is fundamentally broken, but you haven't accepted that yet.


I'm no U2F expert (actually this article gave me most of my knowledge about U2F) but imho there's a small security problem?

The author says:

"This way, the counter range of the primary token is [0, 2097151], while the counter range of the backup is [2000000000, 2002097151]. The fact that those ranges don't intersect ensures that once backup token is used on some service, the primary one is invalidated for good."

But... If my understanding is correct, the counter value is stored both on the server and the token. The server rejects authentication IF the value of counter provided is less than the value of counter stored server-side.

This means that every server (service) stores a different value of the counter.

Doesn't this mean that using the backup token only invalidates the main token for the single service where the backup token is being used?

The main (compromised) token will stay perfectly usable on all the services except for those where we manually disabled the main (compromised) token.

Am I correct in this?
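A toy model makes this concrete (a sketch with hypothetical names, not any real U2F library; the spec only suggests that relying parties reject non-increasing counters):

```python
class Service:
    """Toy relying party that stores the highest counter it has seen."""

    def __init__(self, name: str):
        self.name = name
        self.last_counter = -1

    def authenticate(self, counter: int) -> bool:
        # Reject any counter not strictly greater than the stored one.
        if counter <= self.last_counter:
            return False
        self.last_counter = counter
        return True

github = Service("github")
google = Service("google")

assert github.authenticate(5)              # primary token on github
assert github.authenticate(2_000_000_007)  # backup token used on github
assert not github.authenticate(6)          # primary now rejected on github
assert google.authenticate(6)              # but primary still works on google
```

If this sketch matches the real behaviour, the primary keeps working on every service where the backup hasn't been used yet.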


There should be a way to invalidate a token on all services at once.

Maybe service providers could provide an https endpoint to perform this?

It might be a good idea to give a distinct well-known name to this kind of endpoint, and to include such endpoints in the registration email, so that when a token is to be invalidated, a user can regexp-search for the well-known name in their mailbox, retrieve all such endpoints, and invalidate the compromised token against all of them.
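For illustration only (the `.well-known/revoke-u2f` path below is invented; no such convention exists), the mailbox search could be as simple as:

```python
import re

# Hypothetical well-known path that registration emails would embed.
ENDPOINT_RE = re.compile(r"https://[\w.-]+/\.well-known/revoke-u2f")

mailbox = """
Welcome to ExampleCo! Key revocation: https://exampleco.com/.well-known/revoke-u2f
Thanks for joining OtherSite. Revoke keys at https://othersite.org/.well-known/revoke-u2f
"""

# Collect every distinct revocation endpoint mentioned in the mailbox.
endpoints = sorted(set(ENDPOINT_RE.findall(mailbox)))
for url in endpoints:
    print(url)  # a real tool would send the revocation request here
```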


> Maybe service providers could provide an https endpoint to perform this?

That's a good idea, with one caveat. It needs to be quite hard to revoke a token. Proof of 1-factor is not sufficient; an attacker could then simply revoke your token and then gain access.

So a lost token is quite a challenge, even if there is a revoke URL. This doesn't seem automatable (to revoke everything at once, as you suggest).

> provide such endpoints in the registration email

In addition, the browser should record U2F interactions and know all sites on which a token was registered. Then such a list is very easily accessible, as long as the browser log is available. If it isn't, well you have email as a backup log.

The fact that you can't unilaterally revoke all token registrations is the main weakness of U2F. (besides the cost and the user action required for uptake of it)


Yes, sure: once the primary token is lost, the backup should be used to log in to all of the services we used the primary one for. But it's no worse than if we had a regular U2F token as a backup: we'd again have to log in to all services and revoke the primary token manually.


Yes, I agree. It's not really a security problem. My bad, I should have worded it better.


Incidentally I was talking to Conor about this problem yesterday.

This post, in very short, proposes to flash two devices with the same key. I think this is great for hackers & makers, but not for consumers.

Moreover, it might be convenient on a superficial level, but I think it's fundamentally useless. If we consider the security model, the backup device can only be used if the primary device is damaged. If the primary is lost or stolen, the key should be invalidated, and thus the backup device is useless.

This means that you always need 2 distinct, active security keys anyways.


That assumes there's a real case where you want a backup, i.e. you fear physical damage, but don't fear loss/theft.

For consumers, I believe that the distinction between broken vs lost/stolen is very hard to explain. Think also of a company where you deploy security devices. An employee suddenly needs a backup device. Helpdesk would have to investigate whether it was broken or lost. It's simply a mess.

For a possible solution, I think there should be a protocol where a user can export the key, encrypted for a backup device.

At a high level: 1) insert the backup device and start the backup process; this exports a public key, a proof of possession of the corresponding private key, and an attestation certificate; 2) insert the primary device and export the key: upon validating the attestation and proof of possession, it encrypts the key for the backup device and exports the blob; 3) re-insert the backup device to receive the key, decrypt, store and use it. From this moment on, the device should clearly look like a backup; for example the LED could be red instead of blue (see, for example, how MetaMask displays imported keys).

I don't think the protocol is really important here, I think what's really important is that: 1) the user should be the one creating the backup device, if he's knowledgeable enough and understands the risks and uses, and 2) most importantly, there shouldn't be a way to flash a security device with untrusted code.

I'd be curious to hear what people think about this, would backup be useful to you? We're working on the new gen of u2fzero (https://solokeys.com), and this could be a nice feature to add if people feel it's needed.


There's no advantage to your method vs programming the same wrapping key into both devices. As far as the site is concerned, these are the same device but with different counter streams.

I mean, yes, the 2 devices can be initialized at different times, but otherwise it doesn't matter.


In my method no one knows the key. In the article, you have the key in an env variable. In particular, if someone prebuilds the keys for you, he/she has your key.


> In the article, you have the key in an env variable.

Only during the moment of provisioning.

> In particular, if someone prebuilds the keys for you, he/she has you key.

You have a similar weakness. If someone prebuilds the keys, you don't know if they are loading a fixed key. You also have the problem of not knowing if the attestation cert is actually trustworthy, or where else the key for that cert might live.


I'm not sure if you've read the article.

> If the primary is lost or stolen, the key should be invalidated

Exactly.

> and thus the backup device is useless.

The article mentions multiple times that the backup is set up in such a way so that right after we use the backup token on some service, the primary token becomes immediately invalidated for this service. Read the article for the details on how it's implemented.


An attacker in possession of the primary device can increment the counter, making the primary still working, and invalidating the backup. In crypto, keys are secrets, the rest can’t be relied on for security.

Edit: grammar


> An attacker in possession of the primary device can increment the counter, making the primary still working

In the article I explain how to make it impossible to just "increment the counter" of the primary token, see this section: https://dmitryfrank.com/articles/backup_u2f_token#caveat_wit...

> In crypto, keys are secrets, the rest can’t be relied on for security.

First, there's no 100% security in the world: it's all about time and resources, and proper security consists of many layers. The counter is one of them; after all, it exists in the U2F protocol for a reason (to prevent clones).

And if you've read the article, you'd notice that it mentions: the backup token should only be used to log into accounts, add a new key, and invalidate the old one. Of course it would be a bad idea to keep using the backup token. To avoid waiting for a new pair of tokens to arrive, I keep them together with my backup: my backup consists of the backup token itself, plus a brand new pair of tokens (new primary and new backup), which aren't used anywhere yet. So if something terrible happens and I lose my primary token, I go all the way down to get the backup, use the backup token to log in to each service, add a new primary token, and revoke the old one.


I don't think we understand each other. If a device gets into the hands of an attacker, I won't assume he/she is going to use your code and obey your restrictions. The primary device contains the key, which the attacker can possibly extract and use, setting any arbitrary counter he/she wants.

I see your point on the limited usage of the backup while you buy a new device. This goes back to my initial comment... you basically always need 2 active security devices, so I don't really see any big benefit in having a backup vs a secondary device.


> The primary device contains the key, that the attacker can possibly extract and use, setting any arbitrary counter he/she wants.

Again, I'm not saying it's impossible, but with the existing implementation, it takes a considerable amount of time. I should be able to get the backup token faster.

> I don't really see any big benefit in having a backup vs a secondary device.

A big benefit for me is not having to add my backup token to every single service. It's both more convenient and more reliable (since I can't forget), and also more secure, because I can take my backup token and brick it into the wall. If this benefit is not a benefit for you, then, fine, we're not going to agree.


Got it, this makes sense to me. And sorry, I was rereading the thread, I don't want to think I'm dismissing your whole article, I really appreciate you writing about it.

I think the time is a big advantage. As I mentioned, I'm working with Conor on the FIDO2 security key (actually, an update [1]), and we were thinking about an option to create backups, maybe for advanced users. I'll keep you posted if we end up doing something in this space.

[1] https://news.ycombinator.com/item?id=17777224


Thanks! Yeah, Conor mentioned that to me; I think the real benefit (at least personally for me) would be not buying pre-made matching pairs of tokens, but being able to easily write my own key material plus the counter boost value. You know, Yubikeys do have this functionality for OTP: their utility allows programming OTP keys. I actually expected the same for U2F, but alas.


So basically U2F clones. It would be neat to have some sort of CA between U2F devices: device A could sign device B and be put away, and device B could store the signature and share it with sites. This would allow device A to revoke device B completely, but it would require a change to U2F support.

I understand the counter piece, but let's say someone steals your primary U2F, can't they just increment the counter to 1000002 and it would keep working even if you have used your backup token?


> let's say someone steals your primary U2F, can't they just increment the counter to 1000002

If they steal a primary yubikey token, no. The counter is stored and managed only in the secure element part of the device. If they steal a primary u2fzero token, which of course the proposal depends on, the counter is not protected in any meaningful way.

This doesn't matter in practice, however. No site is doing anything useful with the counter.


> No site is doing anything useful with the counter.

What do you mean? In the article I mentioned that at least Google and Github refuse to authenticate if the counter is less than the last seen value. So using backup token does invalidate the primary one.

> If they steal a primary yubikey token, no. The counter is stored and managed only in the secure element part of the device. If they steal a primary u2fzero token, which of course the proposal depends on, the counter is not protected in any meaningful way.

Where did you get the idea that the counter on u2f-zero is not protected in any meaningful way? The counter is maintained by the ATECC508A chip and is incremented on each authentication. And also see my adjacent comment about reliably preventing primary token from returning a counter which is as large as that of backup token.


> The counter is maintained by the ATECC508A chip

In the u2fzero implementation, the counter is not used internally by the ATECC508A in the signature generation. It's merely used as stable storage.

It's used much like unix advisory file locking. As long as you are not using it adversarially, it will work "correctly".

Once you attack the device, it's absolutely trivial to use any counter value you care to, not at all connected to the (yes, secure-enough) counter internally stored in the ATECC508A.

Apologies about my incorrect statement about any site's usage of the counter. I was mistakenly thinking about the allowance of the counter to increase by any increment.

Still, this is a weakness of the U2F spec. In fact, there is no spec for counter usage on the RP (relying party) side, just an implementation consideration:

> Relying parties should implement their own remediation strategies if they suspect token cloning due to non-increasing counter values.

So you, the conscientious user, would need to verify with each site that they don't allow the counter to reset. Well, you would need to if the counter were implemented correctly with u2fzero.


> Once you attack the device, it's absolutely trivial to use any counter value you care to, not at all connected to the (yes, secure-enough) counter internally stored in the ATECC508A.

Could you elaborate more on that? How exactly I could use any counter value?


I just realized that there IS a reliable solution to this issue with the counter. According to the ATECC508A datasheet, its counters can count only up to 2097151, but the whole range of the U2F counter is 0xffffffff (which is more than 4 000 000 000). So, the counter boost should be set to a value larger than 2097151; then the primary token would never be able to return a counter that large. So once the backup token is used, the primary one is invalidated for good.

Ok cool, I’ll update the article with that important detail.


Wouldn't that also invalidate your backup token after first use?


No, of course not. In backup, we basically use this as a counter: `hardware_counter_value + 2000000000`. We don't care that `hardware_counter_value` cannot be larger than `2097151`; the value we use for calculations is 32-bit, so effectively, for the backup token, the values will start at `2000000000` and the maximum possible value will be `2002097151`.

But the primary token uses just `hardware_counter_value`, so its range is from `0` to `2097151`.

The important part is that the ranges of primary and backup tokens don't intersect.
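A quick sketch of the arithmetic (constants from the article: the ATECC508A hardware maximum and a 2 000 000 000 boost):

```python
HW_COUNTER_MAX = 2_097_151     # ATECC508A counters stop at 2^21 - 1
BACKUP_BOOST = 2_000_000_000   # added to the backup token's hardware counter

def primary_counter(hw: int) -> int:
    return hw

def backup_counter(hw: int) -> int:
    return hw + BACKUP_BOOST

# The primary can never reach the backup's range, because the boost
# is larger than the hardware counter can ever count:
assert BACKUP_BOOST > HW_COUNTER_MAX
assert primary_counter(HW_COUNTER_MAX) < backup_counter(0)

# And the backup's maximum still fits in U2F's 32-bit counter field:
assert backup_counter(HW_COUNTER_MAX) == 2_002_097_151 <= 0xFFFFFFFF
```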


There's no easy way to increment the counter, so one would have to invent some automation for sending authentication challenges to the token and pressing the physical button every time. The time it'd take should be enough for me to get another pair of tokens, enroll them on my accounts and revoke the old ones.

Also, let's make it not 1 000 000 but, say, 4 000 000 000, which still leaves plenty of room in a 32-bit value.


Are there really no commercial products that support this? I'd be keen to buy one that did, current fallbacks seem fairly crazy by comparison.


I am considering adding this to my European distribution of U2F Zero, but the problem here is that as the vendor I then know your secret key. As mentioned in another comment, the uncloneability of Yubikeys is a feature (similar to Google Authenticator vs Authy).


What do you think about a protocol like: https://news.ycombinator.com/item?id=17770803


Not that I'm aware of. I wrote to Yubico and surepassid, both said it's not possible.


I don't see the point. I have 4 U2F tokens: 1 I keep on my person, 1 I keep at home, and 2 in other safe places. The only downside is that if I lose a token I then have to log into all my online accounts which support U2F to update my list of tokens.


Ok cool, so whenever you register on a new service, you have to add all 4 of them, which means going to the "other safe places", picking up the tokens, adding them, and placing them back. Also I doubt your safe places are as safe as bricking a token into a wall. Apparently it's okay for you, so keep it up then. I personally hate this.

Also, having 4 tokens feels kinda too much to me. Each additional device opens a new attack vector, and if your token at home or elsewhere suddenly disappears, you're unlikely to notice quickly, while an attacker could use that time with a token in hand.

Also, if the service doesn't support having multiple tokens (as mentioned in comments, Twitter is such an example), then having 4 tokens doesn't help much.


nice and obvious after reading (a good quality).

he fails to recognize that the yubikey has a secure element and is actually secure, whereas the u2fzero is garbage. (secure vs the host OS but otherwise trivially hacked).

he is worried about losing his phone — which is relatively secure — to the degree that he’d have to invalidate all the keys, yet for the less secure u2fzero has no such concern.

a better solution would be a cloud backed token. (still with caveats of course)


In which way can u2fzero be trivially hacked?


The ATECC508A is not used correctly. It is used as an RNG and for its crypto functions, but not for key storage. So any site's key is trivially extracted with physical possession of the device. The wrapping key is also extractable, therefore you only need one-time offline possession of the device.

The source code is horrible so I'm not going to do a full analysis but that's the gist.

A recent generation phone is far, far more secure.


> The ATECC508A is not used correctly. It is used as a RNG and for it's crypto functions, but not for key storage.

Where did you get that idea from? Preparing a device consists of the following steps:

- Flash temporary configuration firmware

- Send keys to the device, which are stored on ATECC508A

- Flash actual u2f-zero firmware

So on the second step (configuring the device), the host writes keys on the device, and those keys are stored on ATECC508A. It happens there: https://github.com/conorpp/u2f-zero/blob/master/firmware/src...

I do agree that the code is bad though, but alas. I did consider reimplementing it properly, also on a more powerful chip, but I don't think I'll be able to find time for that.

Anyway, dirty code doesn't mean that the device is insecure. From what I see, among other things, ATECC508A is used properly.


> From what I see, among other things, ATECC508A is used properly.

You have completely misread/misanalyzed it. It's used only as a "hardware library" to avoid implementing ECDSA. Or perhaps you don't understand U2F well enough?

I can't understand why you are downvoted. It's a valid (albeit wrong) contributing comment.


I'd actually appreciate if you could elaborate on this. Which features of ATECC508A are not used, but should be used to make the whole thing more secure?


First, note that on your own site, the description of key generation is wrong. That's not how "U2F works", that's how a particular implementation derives keys. U2F tokens are free to create keys any way they like; it's opaque to the RP.

Most critically for u2fzero, the MCU sees the enrollment key. On enrollment, a new key is derived, and the nonce used in derivation is sent to the site as the key handle. On use, the key is re-derived from the returned nonce, given to the MCU, then loaded into the ATECC508A, then the "secure counter" is read and a signature generated. I haven't looked in detail at RMASK vs WMASK but it smells like they are not useful.

In the yubikey, the entirety of this code is implemented inside the secure element. In the u2fzero, except for the actual signature itself, the important parts of this code are executed outside of the secure area. The security of the key derivation is good because it prevents one-time access to the device from compromising future enrollments, but for the use case you propose (lost token) it isn't secure since the strongly-generated already-enrolled key is revealed.

A simplistic correction would be to limit the privacy preserving aspect of U2F to 16 sites (the key slot limit on the ATECC508A). This is a fine compromise since there are only 11 sites that accept U2F, per https://www.dongleauth.info/dongles/ . On the 17th reg, u2fzero can refuse to register, or it can cycle back to the 1st key, or it can keep reusing the 16th key, or it can regenerate (and thus unenroll) a key, or other choice.

Rather than a wrapped key, or nonce, or other data for key regeneration, the handle returned to the site is just a key slot index. In this way, the key is generated by and never leaves the ATECC508A.

So, if you're following along, you now realize that this defeats your backup methodology, since the keys would not be wrapped and passed back and forth between site and token. You could pre-generate all the 16 keys outside of the device and program 2 of them identically.

Or you could use the ATECC508A internal DeriveKey command. This can use just the appid as nonce and doesn't need an actual random nonce. This doesn't limit the number of sites that can be registered and also allows duplicates (by programming the same master key into all equivalent devices, along with a starting counter value). The handle in this case can just be a static token-id. If you wanted multiple users to actually register different keys using the same token, then you should add a random nonce. This prevents a site from identifying 2 users sharing a token, and learning something about their relationship. Or a site from identifying 1 user with multiple accounts. But the nonce isn't a security aspect per se, just privacy.
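The DeriveKey variant can be sketched with stdlib HMAC (an illustration of the scheme as described, not the ATECC508A's actual KDF): program the same master key into both tokens, and each re-derives identical per-site keys from the appid.

```python
import hmac
import hashlib

def derive_site_key(master_key: bytes, appid: bytes) -> bytes:
    # Deterministic per-site key: any token holding the same master key
    # derives the same key for the same appid -- which is exactly what
    # makes a duplicate/backup token possible.
    return hmac.new(master_key, appid, hashlib.sha256).digest()

master = hashlib.sha256(b"example master secret").digest()  # stand-in secret

token_a = derive_site_key(master, b"https://github.com")
token_b = derive_site_key(master, b"https://github.com")  # the "duplicate"

assert token_a == token_b                                      # same site key
assert token_a != derive_site_key(master, b"https://google.com")  # per-site
```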

Regardless, your desired use case cannot be met "securely" with the ATECC508A, because the counter is not secure. Anyone with access to the lost primary token can just set whatever counter they like. This is mitigated by the token being a 2nd factor. I would further mitigate it by burning the JTAG fuse, and potting the ATECC508A. Then the MCU can't be reflashed or read out, and the ATECC508A can't simply be popped off trivially and put on a new custom board. If you mix a partial static data from the MCU, along with random nonce, as part of key derivation, then the MCU and ATECC508A pair are married and you have to depot both of them and now the attack is hard enough (beyond amateur hour) that you should have plenty of time to enroll a new token.

Let me just also laugh at the u2fzero site: "It is implemented securely."


First, I agree that there certainly are ways to do this more securely, like generating keys on-chip so that nobody knows them, but as I mentioned, along with security we have to think about a good recovery plan; otherwise one has to come up with some back door which compromises that (good) security. And I still believe that keeping a second U2F token easily accessible at my home is way less secure than even the current implementation of u2f-zero with the backups as explained. It's all about tradeoffs.

Second, it looks like you're assuming that it's easy to just grab the MCU and read out the code programmed into it. From my past embedded experience I know that, e.g., MCUs by Microchip have read-protection bits in their config, so if a part is programmed with those bits set, one can't just read out the hex. I'm not saying it's totally impossible, but it takes a considerable amount of time. This is the relevant part of the datasheet for, e.g., the PIC32:

> 27.2 Device Code Protection
>
> The PIC32MX Family features a single device code protection bit, CP, that when programmed = 0, protects boot Flash and program Flash from being read or modified by an external programming device. When code protection is enabled, only the Device ID registers are available to be read by an external programmer. Boot Flash and program Flash memory are not protected from self-programming during program execution when code protection is enabled.

HOWEVER, that said, after briefly looking at the u2f-zero MCU's datasheet (https://www.silabs.com/documents/public/data-sheets/efm8ub1-...), I failed to find any mention of read-protection, which does seem strange to me. I need to dig in more before I can say for sure. But nevertheless: suppose I find some time and reimplement something like u2f-zero on an MCU which does have code protection, e.g. a PIC32. Does that address your concerns?


> way less secure

First you argue that the device is secure. When I point out that it is not, you switch tack to argue that when you take other factors into account, it's still "better" than doing it some other way because of some other factor, so overall security is good.

You also want to conveniently forget aspects of your original argument at your blog, such as the defect with OTP that a phone can be stolen. A phone is much, much, much more secure than u2fzero.

You're also insisting that your particular use case for needing a backup is the only and best use case. If you want to design a system for your individual desires and needs, that's great, but it doesn't generalize. I'm guessing most people will not want their device baseline to be insecure just so that they can have a 2nd insecure backup at hand.

As I hinted, the best solution is likely a cloud-based token. This can be simulated with u2fzero using a method similar to what @ecesena proposed. With either knowledge of an authorized external transport key, or by sharing a transport key, the ATECC508A can share key material securely. I won't work out the details here, but basically you just need a single initialization-time secret that you store in the cloud. With that, you can take an arbitrary fresh, unprogrammed u2fzero and initialize it to look like the one you lost. It has to be done this way (under the user's direct control), because if you buy u2fzeros pre-programmed, all security is lost.
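The restore flow being described could look roughly like this (names hypothetical; real provisioning would go through the ATECC508A's authorized-write commands, not a dict):

```python
import secrets

def provision_token(init_secret: bytes, counter_start: int) -> dict:
    """Hypothetical sketch: the token's entire recoverable state is the
    initialization-time secret plus a starting counter value. Programming
    a blank u2fzero with both reproduces the lost token."""
    return {"master_key": init_secret, "counter": counter_start}

init_secret = secrets.token_bytes(32)  # generated once, backed up to the cloud

original = provision_token(init_secret, 0)
# After a loss: start the replacement's counter well past any value the
# lost token could plausibly have reached, so servers accept it.
replacement = provision_token(init_secret, 1_000_000)

assert replacement["master_key"] == original["master_key"]
assert replacement["counter"] > original["counter"]
```

The point is that nothing device-specific needs to survive the loss: one secret plus a counter floor fully characterizes the replacement.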

The secret blob can be stored on your [secure] phone and backed up on a piece of paper in a safe, or via any other storage scheme a particular user may require to meet their needs. Personally, I would store it in iCloud.

Or, if you used public-key crypto, closer to @ecesena's proposal, you don't need to keep a secret at all (at the expense of having to prepare the backup devices ahead of time).


I apologize for being inconsistent in my arguing; indeed, I "switch tack". That's because it's actually hard for me to constantly keep the full picture in my head, and sometimes I start thinking like "omg, what have I done, it's so insecure", but then realize that I already thought about it and still decided to use that strategy because of this, this and this.

In fact I really appreciate you explaining all these details, and to be honest I need to find a decent amount of time to fully wrap my head around it; hopefully on the weekend I'll be able to. I doubt it will change my reasoning about my personal use case, but learning something new about security is never bad. Thanks for that! Hopefully I'll get back to you after learning more.

Also, I want to make 100% sure: are you arguing just that u2f-zero is insecurely implemented, or that the whole concept of a backup with a cloned and securely-stored token is also bad? If we imagine that e.g. Yubico starts producing matched pairs of tokens (I doubt it would, but still, let's imagine), primary and backup with the boosted counter, implemented securely enough etc., would that still be bad in your opinion?


Diagrams help me. They do wonders to organize the thought process down to the core ideas and cement them in your mind. I have been planning to try some "mind mapper" software to help with this but haven't done so yet. If you produce some kind of diagram, I would be happy to review it with you.

> are you arguing just about the u2f-zero being insecurely implemented,

yes. the concept of a pre-cloned token is a good one. my main point of contention with you is that in your threat model a lost phone is a security problem, yet a lost u2f token is not. you need to revise your threat model or revise your solution.


> my main point of contention with you is that in your threat model, a lost phone is a security problem, yet a lost u2f is not

I realize now that I could have explained my concerns better in the article. My biggest issue with the phone is not that it could get hacked, but that I could lose it together with my u2f token (because, you know, I always carry both), and thus get locked out of my accounts. So it's not about somebody attacking me specifically to get my 2FA data, but about plain bad luck resulting in losing both the u2f token and the phone that was its backup.

Instead of Google Authenticator we could use Authy, which synchronizes its database with a server, but an Authy account can be recovered via SMS, which is anything but secure. I actually just updated the article with that point.

I really want a backup which is rock-solid secure and reliable, you know, more reliable than any other 2nd factor I have. So having a token bricked into the wall or something like that would work.

> you need to revise your threat model or revise your solution

When I have a chance to reimplement the same backup concept on something more secure than u2f-zero, then yeah, I surely will revise the solution.


> MCU which does have code protection

http://www.break-ic.com/

in short, MCUs are not secure devices. read-protection bits are a great surface-level deterrent, and certainly sufficient for most threat models. but in this case we are talking about an authentication token that is obligated to keep secrets, and one cannot claim a "secure implementation" without secure key storage.

even the atecc508a doesn't really meet the definition of secure: e.g., it has no EAL certification. no reputable vendor would use it in a commercial security device. it's great for its target market though.

unfortunately, there is no way for a hobbyist to acquire actually secure chips. something like the atecc508a will have to do.


although, for this implementation, you don't need to read out the chip anyway; you can just sniff the i2c bus to learn the keys.


> in short, MCUs are not secure devices

I doubt there is a 100% secure device; it's all about tradeoffs. Given enough time and resources, nearly anything can be hacked.

By the way, if an attacker has gained physical access to the primary token (and also to the other factor, the password, since the token on its own is not too helpful), they don't need to hack it in any way: they can just use it to log into the account, add some other token, and revoke the existing one.


> I doubt there is a 100% secure device

you might want to read up on EAL certification. Yes, there is no 100% secure device, but "secure" is about resistance to attack, and an actual secure element is very resistant, as well as tamper-evident.

u2f-zero could be hacked in 10 seconds flat if you prep ahead of time.

> if an attacker has gained physical access to the primary token ... they don't need to hack it in any way

but they need long-term access. with this device, short-term access is enough to learn the key; then i can get access at a time of my choosing. i can also learn the counter value, so you won't notice that i have gained access.


Yes, using a normal MCU for U2F is a bit of a compromise, since EAL-certified chips are unobtainium. So flash read protection is the main barrier to physical cloning methods.

I'm not aware of any method to bypass the read protection on normal MCUs in a 10-second "drive-by" attack. AFAIK, the specialist companies that provide flash readout (http://www.break-ic.com/) do so by decapping the chip and using involved imaging techniques. I suspect they get good at identifying various flash technologies, many of which are common across chips. But I don't think it's feasible as a drive-by.

The I2C eavesdropping shouldn't be an issue because the ATECC508A does apply a mask.


> you won't notice that i have gained access.

If I lose my primary token, how can I not notice? I look at it a few times per day, and I use it fairly often too, so I can't see how I could fail to notice losing it (even if it were replaced by a similar-looking device).


> you can just sniff the i2c bus to learn the keys

RMASK and WMASK, which didn't seem useful to you, exist exactly to prevent this from happening.


lol no. this is almost not even worth a response.

the key managed here

https://github.com/conorpp/u2f-zero/blob/master/firmware/src...

and here

https://github.com/conorpp/u2f-zero/blob/master/firmware/src...

just means that the "actual key" (unmasked) only lives in MCU memory for a short time: from when the mask is XORed off until return to the caller, at which point memory is cleared, in the enrollment case I linked. In the authenticate case, it lives quite a bit longer, because the stack space used for key storage isn't zeroed.

The atecc508 doesn't use or know how to use the mask. The actual key used for the encryption is passed in the clear over i2c.

(note that the key derivation you suggest is wrong because of the extra xor masking.)
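To make the objection concrete, here is a toy model of the XOR masking (my own sketch, not the actual u2f-zero code): the mask only protects the key while it sits in MCU memory, because it has to come off before the key crosses the bus.

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(a, b))

mask = bytes([0xAA]) * 32        # stand-in for the firmware's key mask
actual_key = bytes(range(32))    # stand-in for the real private key

stored_in_flash = xor(actual_key, mask)      # what the MCU keeps at rest
sent_over_i2c = xor(stored_in_flash, mask)   # unmasked before talking to the chip

# The ATECC508A never sees the mask, so the bus carries the real key.
assert stored_in_flash != actual_key
assert sent_over_i2c == actual_key
```

So reading MCU flash yields only the masked form, but a logic analyzer on the i2c lines would see the unmasked key, which is the crux of the disagreement here.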


This is a bit late, but the atecc508 does apply a random mask; see the PrivWrite command in the datasheet.

http://ww1.microchip.com/downloads/en/DeviceDoc/20005927A.pd...



