This is huge! It sounds like they're finally going to implement cross-device synced credentials, a move I've been advocating for the last two and a half years[1].
Widespread support for this feature is, in my opinion, the last thing needed to make WebAuthn viable as a complete replacement for passwords on the web.
I went through the white paper, yet I still don't completely understand how it's supposed to work cross-device; granted, I'm new to FIDO.
Let's say I have the same key synced between my laptop, smartphone and tablet. When I wake up in the morning, will there be a ceremony of unlocking my phone (the standard non-FIDO way, I guess?), then unlocking my tablet from my phone, then the laptop from one of the unlocked devices? With some more costly backup process in case I only physically have one of the devices, I guess?
Sync in this situation means that the actual private key used to sign in to the website is stored in a password manager as if it were a password, and the service vendor (iCloud Keychain[0], for example) is the one that syncs the key to other devices utilizing that password management service.
But this 'passwordless' trend is more about signing into websites - If they do implement signing into other devices, I don't think many people will do it (but it's possible - Windows Hello already allows you to sign in with a security key and disable signing in with the MSA password).
I think the other reply here might be missing something because while I have not read the whitepaper, the announcement touts these two benefits of deeper FIDO commitment:
> 1. Allow users to automatically access their FIDO sign-in credentials (referred to by some as a “passkey”) on many of their devices, even new ones, without having to re-enroll every account.
> 2. Enable users to use FIDO authentication on their mobile device to sign in to an app or website on a nearby device, regardless of the OS platform or browser they are running.
Point number 2 directly invokes cross-device, cross-platform authentication. It sounds like "you can use your iPhone or Android to sign into a website on your Windows PC" to me. Whether passkeys might actually sync between iCloud Keychain and whatever Microsoft offers seems unclear, but much less likely.
I get that, but I think OIDC could be extended to cover that too, where the authenticator or IdP is the local face scanner or other biometric, and the rest (i.e. the exchange of tokens etc.) stays the same. That way there won't be two completely separate paths, which would defeat the purpose of SSO, i.e. OIDC websites authenticating with Google or Facebook while FIDO-enabled websites work with face recognition. And it looks like there are already some implementations of this OIDC-enabled face recognition: https://www.bioid.com/facial-recognition-app/
1. You can use OpenID Connect as a protocol to integrate (via federation) with a site that provides authenticator management. This is AFAIK how most deployments work today - even if that OpenID Provider winds up being something you run or you pay to be run for you (AKA a CIAM solution).
2. There is an upcoming specification, Self-Issued OpenID Providers v2, which provides a redirection flow to an agent such as a native app or PWA. This does look a bit different from traditional OpenID Connect though, as each End-user is effectively their own issuer with their own public key pair.
Since the browser and platform will have integrated support for FIDO/WebAuthn tech, they may still provide a better experience for equivalent scenarios.
"Security at the expense of usability comes at the expense of security."
Technically yeah, device-bound keys are "more secure", but not if that results in people continuing to just use passwords instead because updating your credentials on dozens of sites every time you get a new phone or security key is too difficult.
Synced WebAuthn credentials are at least as secure as a properly-used password manager, way more usable, and a lot more secure than passwords, which is what they're replacing. Besides, there's still the option of using separate device-bound keys for situations where even higher levels of security are required.
Are there any FIDO security keys that explicitly support backing up and restoring their master secrets? I would love to move from Username + Password + TOTP but my current workflow requires that I am able to regain access to my digital accounts using nothing but a few page paper backup including core service passwords & exported TOTP secrets.
Just so you don't feel alone with the replies being of the typical variety, I'm 100% with you. The flaws in the "backup token" approach are rehashed constantly but the world keeps turning as though they're irrelevant.
I look forward to hardware tokens reaching a popularity level where we see implementations in software and this conversation can be rendered moot.
Shout out to Mozilla and Dan Stiner for their work so far.
I wrote that U2F implementation in software because I wanted phishing protection without needing to carry a hardware key. Well, and to learn Rust :) It's certainly a security trade-off to just store secrets in your keychain like I choose to; it's not meant to be a replacement for a hardware key, and in fact I have a Yubikey I use when the situation calls for it.
I'd love to use a TPM and biometrics to implement U2F/WebAuthn on Linux and have a proper, secure solution, similar to what Apple has done with Touch ID. But that's no easy task. TPM support is poor on Linux, and other options, like relaying auth requests to your phone for approval and storing secrets in the Secure Enclave, are no easier.
> relaying auth requests to your phone for approval and storing secrets in the Secure Enclave
Like the acquired/abandoned https://github.com/kryptco/kr [key stored in a [...] mobile app] with iOS and Android apps all under an "All Rights Reserved"-source license?
Also, newer Macs have a Secure Enclave (supports 256-bit secp256r1 ECC keys):
I'd love to have something equivalent for Linux, but given that requires hardware support I think relaying auth requests to your phone is the closest equivalent.
Software ("virtual") implementations are already possible in WebAuthn. It's up to the service whether to allow enrollment via a software authenticator; most services will want to allow this, seeing as it's still way more secure than ordinary username/password.
For web apps/services, the browser needs to be involved here too, right? (And maybe the OS?) How can I tell Chrome on my desktop to use my "software token" instead of Chrome looking for a hardware token over USB or finding it via NFC, so the remote service can ultimately interact with my (virtual) token?
(I don't even want to think about how to tell Mobile Safari on my iPhone how to find my key)
EDIT: My ideal setup, I think, is an app on my phone that I can use as my token - somehow signaling to my desktop/laptop that it's nearby and can be used as a token and ideally popping up a notice on the phone lock screen when there's an authentication request so I can quickly get to it. Then in my app, I'm free to export and backup my keys for all of the sites I'm enrolled with as I see fit. I know, I know, maybe being able to export the keys makes the setup less secure, but I will trust myself not to accidentally give the backup to a phishing site. (And I do worry that I'll accidentally get phished using a TOTP app, so I'd like to switch to FIDO, but I don't want the pain of multiple keys)
I do NOT want to use my phone. It cannot be considered to be a secure device given the 'network' baseband control chipset will never be owned by the phone's buyer and has full access to the device.
Storing your keys in secure hardware on a phone is almost certainly more secure than storing a key in software on your desktop hard drive.
If you don't trust your hardware, it's almost always game over. Desktops have devices running dubious firmware as well, but at least with a hardware key store, the window of compromise ends when you update – a stolen key stays stolen forever.
For me the biggest anti-phone argument is that they break and that is very common. They also run out of power. Hardware keys offload this to something that can go through the washing machine or ride in monsoon rains on my motorbike's keychain.
This is an urban legend. It ticks all the boxes for people who are inclined to be paranoid about these sorts of things (I realize saying that may come across as a value judgment: it isn't), so it remains a popular meme. But "the baseband controls the main phone" is a meme that was maybe true for mid-00s dumbphones, not modern smartphones.
That's not to say that you should trust modern smartphones. That's up to you. It's just that in whatever "trust" means to you, the baseband urban legend shouldn't come into the equation.
While I can't find the reference, I remember reading about how Apple set up their connection to the modem in a very particular way, such that it has its own co-processor for any code it needs to run, and the bridge between it and the main SoC is just I/O for RPC and network access.
Well, the baseband always was its own processor - that's why it's a separate thing called a baseband. What you're thinking of is an IOMMU, it's like a firewall that prevents coprocessors from reading all of RAM.
If there are vulnerabilities on the AP (main processor) then you can hack it from the baseband, but also from the bluetooth or WiFi chips.
Your phone is a whole lot more secure than your PC. It has to be, because you carry it around with you all day and it's easier to steal, so it has to be resistant to a lot more things.
Caveat emptor on cheaper/older phones or ones you enabled developer modes on.
Yes, either the browser or the OS will need to be involved. For example, for WebAuthn in Chrome on Windows: Chrome receives the request and calls the Windows Hello APIs. That Windows Hello API then shows a popup to read a physical security key or authenticate a virtual security key via face/PIN (the virtual key is protected by a TPM; it's "virtual" since Windows generates it via the TPM but stores it encrypted on-disk).
To support a syncing FIDO keyvault, Chrome could very well redirect the calls to its own popup for choosing 'use Chrome' or 'use another key', which would then call the Windows Hello API. In fact, Chrome already supports this[0]; 'Add a new Android phone' is simply how they're presenting WebAuthn over BLE, and it works with iOS when passkeys are enabled in the iOS developer menu[1].
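To make the routing concrete, here's roughly what the sign-in call that triggers all of that looks like from the page's side; everything below the browser (Hello dialog, USB key, phone over BLE) is invisible to this code. Placeholder names again:

```ts
declare const challengeFromServer: Uint8Array;
declare const credentialIdFromRegistration: Uint8Array;

const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: challengeFromServer,
    rpId: "example.com",
    allowCredentials: [{
      type: "public-key",
      id: credentialIdFromRegistration,
      // Transports are hints, not requirements; "internal" is a platform
      // authenticator like Windows Hello, "hybrid" is the phone-over-BLE
      // flow described above.
      transports: ["internal", "usb", "hybrid"],
    }],
    userVerification: "preferred",
  },
});
```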
A much more secure way of doing this is to use the platform's/OS's most secure way of storing private keys, which in many cases is hardware (Secure Enclave on iOS, TrustZone or "real" secure hardware like the Titan M on Android, TPM on Windows/Linux).
This is already supported by many browsers (unfortunately Mozilla/Firefox are dragging their feet on this one [1]) and gives you exactly the user experience you want.
This does not solve the backup issue. It's effectively using the phone or computer as a whole as a hardware key, which introduces multiple failure modes compared to external hardware keys while also adding to privacy concerns. It might have some extremely niche use for on-prem devices in enterprise settings where the inability to sever the authentication element from the actual hardware might be convenient; other than that, TPMs are essentially a misfeature given the existence of smartcards and hardware keys.
The backup issue is solved by using an external authenticator for initial provisioning of new devices.
In a compliant implementation, you can add a new external authenticator from an existing trusted device, and a new trusted device from an existing external authenticator.
> while also adding to privacy concerns.
What concerns are you thinking about here?
> TPMs are essentially a misfeature given the existence of smartcards and hardware keys.
TPMs are essentially built-in smartcards (with a few other optional features like measurements/attestation, but these have never really taken off as far as I know, other than giving TPMs the reputation they have) and are very well suited for use as platform authenticators.
> In a compliant implementation, you can add a new external authenticator from an existing trusted device, and a new trusted device from an existing external authenticator.
You can kinda sorta do this with WebAuthn if the service you're enrolling into allows for multiple authenticators (the spec recommends this, but some services don't allow more than one). But then you have to repeat that enrollment step with all devices, for every new service you sign up to. Which is practically useless because an actual backup is supposed to be stored in a safe place that might be hard to get to.
> TPMs are essentially built-in smartcards
The question is why anyone sensible would want to have a smartcard built into their computing device. The only uses I can think for it are nefarious, i.e. allowing outside services to track the user and violate their privacy.
To securely store device-specific authentication credentials, such as those used by WebAuthn/FIDO, for example.
> The only uses I can think for it are nefarious, i.e. allowing outside services to track the user and violate their privacy.
A smartcard would be one of the worst or at least most complicated ways to implement tracking: It can communicate with the rest of the system only through an extremely limited interface and can strictly only ever answer requests sent by the host, never initiate requests on its own.
To do anything nefarious, it would need a privileged companion service on your computer – which doesn't gain anything from being able to talk to the smartcard.
As an aside: Even TPMs are an extremely passive technology. The only thing that arguably makes them "evil" is the fact that they can perform measurements for device attestation, but it can still never transmit these on its own. That evil is pretty indirect, in that some service providers might only allow users to use TPM-enabled and sufficiently attested clients to access their services, and exclude open hardware and software.
That's coincidentally exactly what DRM is, and it's already here, and not at all limited to TPMs. I'm cautiously optimistic though that it's possible to strike a compromise and limit attestation to properly sandboxed parts of the system, e.g. only the parts of the GPU relevant to display copyrighted movies, without getting undue access to the rest of the system.
The smartcard part of TPMs is about as capable of evil (as far as your computer and your data on it is concerned) as a USB-connected mug warmer.
You can use the "smartcard part" of a TPM. This gives you secure/non-extractable key storage.
You can use the attestation/trusted computing part of a TPM. This gives you trusted computing, which can be used for DRM, if you install software or use a service using DRM and grant it access to your system. If you don't like that, just don't do that. (Today's DRM solutions don't even use TPMs anymore, for what it's worth.)
If everyone were forced to use a TPM, it probably would still be used as a DRM mechanism. My problem is with enabling that usage in the first place when I only get negligible security improvements in return.
The only thing that kept DRM from leveraging it was indeed the low usage in consumer spaces.
This is such a restrictive security model though. Sure, devices are already identifiable. That is a security issue in my opinion. Yes, authentication is one use case where it is actually beneficial. But the security threats from this are far greater in my opinion even if you include phishing. Privacy is a concern for users even if it is conveniently ignored here.
> Your privacy is important to us.
That's the group's privacy statement, and I think it is a straight lie. This is a major, if not the major, security concern. TPM didn't address the problem and therefore isn't a very popular guest.
I think the whole point of HSMs is that you can’t back up (read: exfiltrate) the master secrets. Having said that, on certain Yubikeys you can store PGP keys on them, and put the same secret key on several different Yubis. If you’re relying on a hardware key it’s probably a good idea to have a backup key and make sure both are registered with whatever system you’re accessing. LastPass and GitHub at least support adding several different security keys.
> I think the whole point of HSMs is that you can’t back up (read: exfiltrate) the master secrets.
You're getting it backwards though. You are right that the whole point of an HSM is to not leak secrets when connected to a compromised computer. However, there's nothing wrong with an HSM device that can be initialized with a "seed" of your liking, as long as that initialization step is done in a fully offline/airgapped way.
Ledger (whose CEO was, before creating Ledger, one of the members of the team working on the FIDO specs) makes a "hardware wallet" for cryptocurrencies that can run a FIDO app. And it's totally possible to initialize your U2F device with a seed of your liking.
Now, I did test this a while ago (out of curiosity) and it was all working, but I'm not really using it atm: I don't know where it's at regarding the latest FIDO2 specs.
But the point is: what GP asked for can totally be done.
I understand some may want to move the goalposts and say: "ok, but then the problem now is not losing your piece of paper where you wrote that seed". But that is another topic altogether that changes nothing about the fact that you can have an HSM used for U2F that can be backed up.
It never comes into contact with the memory of a network-connected system, putting it beyond the reach of phishing or malware and thus solving the overwhelming majority of risks actual passwords are exposed to.
I, too, find devices prevent exhilaration these days.
Seriously, though, paper is better for most people than managing device cloning or the like. Most people can have a notebook in their house as a backup of last resort. Asking folks to become HSM managers seems unlikely to lead to better outcomes.
The ability to have a backup does not imply any ability to exfiltrate the master secrets.
It is enough to have a means to wipe out any information contained in the device, including any master secret.
At that point, there should be a means to enter a new master secret in the blank device, before proceeding to use it normally.
If a device provides this feature and it does not contain any other secret information introduced in it by the manufacturer, then it allows the owner to have any kind of backup that is desired.
I am also one of those who would never use a security device that contains any kind of secret information that is not under my control.
> If a device provides this feature and it does not contain any other secret information introduced in it by the manufacturer, then it allows the owner to have any kind of backup that is desired.
Precisely. The Ledger Nano S (and probably the Nano X too) allows you to do exactly what you describe, the very way you describe it: three wrong PINs, on purpose or not, and the device resets itself to factory default. As you wrote, at that point it's unusable until you enter a master secret again (either your old one or a new one: the device has no way to know and doesn't care).
If I’m the one entering the master secret in, then the device is a glorified password manager. The point of an HSM is that nobody, not even the user, can access the secrets. I’m not saying there isn’t a use case for such a device, or that it isn’t possible, only that the security guarantees you get from it are different. The security model you’re describing is the same as someone entering their secret key in the “notes” app in a phone, leaving it in Airplane mode with FDE and wipe after a certain number of incorrect PIN entries. You can call that a “HSM”, but it’s not what I’d consider one.
A password manager that does not allow one to log in to the wrong site is still very useful. Also, just because you're entering a master secret in doesn't mean it's any easier to get it out. The user could simply be required to generate the master secret herself and back it up on her own.
Sounds basically like a key store/loader like this one: https://www.cryptomuseum.com/crypto/usa/kyk13/index.htm. I have a little experience with its successor. There's a legit purpose for it, but it's a different animal than an HSM in my opinion.
The device that you name "HSM" is the kind of device that is suitable for a company to distribute to its employees to log in to the company network or servers.
It is not a device useful to have for an individual user. On the other hand, hardware password managers are useful for individual users.
For HSMs with FIPS 140 Level 3 certification, the master key can only enter and leave in encrypted form. Backup/restore and cloning are possible, but there are mechanisms like hardware and firmware validation to ensure only the same type of device and certified vendor software can make use of it.
You'll need to generate the key on a less secure host to do that, though, which partially defeats the purpose of a hardware key in the first place.
As far as I understand, "real" HSMs (i.e. the expensive, rack sized type of security key) sometimes offer the ability to export their root key to other models by the same manufacturer using a specific ceremony.
Arguably this also significantly weakens the security of the keys protected in the HSM, but at least it does not automatically expose it to software.
But is that a problem though? I generated my own HSM/U2F keys throwing dice, and the seed is basically just one 256-bit number. I did have, indeed, to compute a matching checksum (the scheme I used represents the 256-bit number as a list of 24 words, out of a dictionary of 2048 words, where some bits of the last word act as a checksum).
This only needs to be done once. For example by booting an old computer with no wifi capabilities and no hard disk from a Linux Live CD.
> You'll need to generate the key on a less secure host to do that, though, which partially defeats the purpose of a hardware key in the first place.
I kinda disagree with that. I generated my key by throwing physical dice; no random number generator to trust here. I only needed an offline/airgapped computer to compute the checksum, and that program cannot lie: I know the first 256 bits out of the 264 bits, so the program computing the checksum cannot lie to me; it's only giving me 8 bits of checksum.
Then I only need to trust the specs, not the HSM vendor.
Now, sure, my old Pentium III without any hard disk and without any WiFi, without any physical ethernet port, may be somehow compromised and exfiltrate data through its fans or PSU or something but what are the odds here? Especially: what are the odds compared to the odds of having a rogue HSM vendor selling you U2F keys for which it fully knows the secret?
I'd argue this requires less trust than the trust required in buying a pre-initialized HSM device.
> booting an old computer with no wifi capabilities and no hard disk from a Linux Live CD.
By doing that, you are increasing your trusted code base by several orders of magnitude. This might be fine for your purposes, but in a corporate environment, it might very much not be.
> Then I only need to trust the specs, not the HSM vendor.
You do trust the HSM (vendor) no matter how you use it. Ironically, the more modern a cryptographic protocol is, the more opportunity for surreptitious key exfiltration there is. This could be in the form of predictable (using a shared secret) initialization vectors, wrapped keys and much more.
You also trust an HSM to be more tamper-resistant and/or more hardened against logical attacks than a regular computer, or there would not really be a point in using one in the first place.
You can generate the keys inside the Yubikey. Then just have two keys instead of a shared key. That's actually better IMHO, since it allows you to revoke one if you lose it, instead of having a compromised backup.
Ah, sure, if your use case allows registering multiple keys that is indeed a good way to solve it. Unfortunately that's not always the case (as pointed out in other threads).
I want to avoid having to fetch my backup key every time I want to setup a new account. The backup key is kept offsite and thus inconvenient. It's offsite because that protects me from fire or other such catastrophic events.
This is what keeps me preferring password-based security: I can back up my encrypted password database offsite with ease. Everything else provides a hard path to recovery.
> As stated, you can have as many backup keys as you like.
That does not solve anything unless the backup keys are enrolled to each and every one of the services you use. Adding more backup keys and storing them more and more securely just makes it harder to be sure that you've enrolled them all to the latest service.
This is not an issue if users are explicitly allowed to enroll "virtual" soft authenticators that they can back up and restore as they see fit, but that's an additional requirement that comes at some compromise, since some services might instead want to ensure that you're enrolling a non-cloneable credential. (E.g. your physical bank or non-remote employer, that can easily verify your identity via additional means if needed to restore your access.) The WebAuthn spec allows for both models.
I think their point is that they don't want to have the window of vulnerability (to loss) when they have added a new account but haven't yet rotated their off-site key?
That said, the real answer is that FIDO keys can be synced by e.g. Apple (as described in more detail here: https://www.wired.com/story/fido-alliance-ios-android-passwo...). So you can potentially just make your offsite backup be a hardware key that gets you into your iCloud keychain, and (if you are willing to trust Apple) use your iCloud for backing up all your other accounts' keys.
That's a pretty big flaw. I use a different hardware key plugged into each workstation I use and then some "roaming" keys that I can use for backup, travel, etc.
To work around this I sync multiple Ledger devices with identical seed phrases which allow for duplicate FIDO devices that can be shared with any teams that need break-glass root account access.
I have 100s of passwords and dozens of TOTP keys in my password manager. Logging into every one of these sites with 2 keys, and having to re-auth with all of them if you lose one of those is unworkable. It only really makes sense for centralized auth solutions like you'd have at work, not for day to day personal things. I want a FIDO key that I can use for day to day things.
First of all, _most_ services is not _all_ services, so you have a use case here.
Also, you could make FIDO keys that support restoring but not backing up. If you could set up a FIDO key with a custom random seed _as an expert option_, then you could have a secure key, and keeping the seed private would be your expert problem.
I would adopt such a solution, whereas now I don't adopt the proposed solution because I cannot add a new service while having the backup key remaining off-site.
Maybe another solution would be to have _absolutely all_ services accept several keys (enforced by the protocol), in addition to being able to accept adding an off-site key with only its fingerprint, without requiring you to have it physically present.
Ledger (https://www.ledger.com) supports FIDO and lets you do backups. You really need a screen to do it correctly, otherwise there is little point in having an external device.
TacticalCoder has previously commented here (https://news.ycombinator.com/item?id=26844019) about the ability to set a specific seed on Ledger devices, so you can make a backup key that behaves identically to your main key.
This might be true for cryptocurrency transaction initiation, but in the WebAuthN model, what's the benefit of having a screen?
The result of a WebAuthN challenge procedure is almost always a session cookie (TLS channel binding if you're really fancy), so the only thing that an authenticator could display on your screen is "do you want to authenticate as user x to website y", which arguably does not add that much value.
> … so the only thing that an authenticator could display on your screen is "do you want to authenticate as user x to website y" …
That is exactly why you want it.
Consider, for a moment, that you have a key which is used to log in to your bank account and some other, much less critical site. Perhaps a GitHub account where you store some hobby projects.
Without an unforgeable indication on the authenticator to show what you're logging in to, malware can wait until you're logging in to the second site, and thus expecting a prompt to use the authenticator, but actually trigger the authentication process for your bank off-screen. You tap the button or whatever on the authenticator thinking that you're logging in to your GitHub account but actually all your money is being siphoned off to who-knows-where.
A key that signs whatever request is presented to it without any indication to the user of what the request actually was is dangerous.
If you have malware on your computer (that can compromise the browser), it can just wait until you actually log in to your bank and then grab the session cookie/proxy away your authentication.
It's a different story if the operation you are confirming with a security key actually can be rendered on the display, e.g. "pay $100 to someshop.com" (as in SPC [1]). In that scenario, there is actually nothing to steal except for the signed message itself, which would be useless to anybody that's not someshop.com, but given that WebAuthN almost always just yields a session cookie, I don't really see the benefit.
> If you have malware on your computer (that can compromise the browser), it can just wait until you actually log in to your bank and then grab the session cookie/proxy away your authentication.
Sure, but you might never log in to your bank from this particular computer precisely because you don't trust it. But you think it's fine to log in to your hobby account since that doesn't store anything you really consider important.
If you assume there is never any malware on the host then you don't need the key at all—the host can store the secrets and handle the authentication on its own.
Oh, that's a good point – I personally never use my security key at untrusted computers, but I guess this could be a somewhat common use case.
> If you assume there is never any malware on the host then you don't need the key at all—the host can store the secrets and handle the authentication on its own.
True, a permanently plugged-in authenticator is largely equivalent to just using a password manager (which also protects against skimming, if used exclusively via autofill, never via copy/paste), but unlike a password manager, it makes unsafe actions explicitly impossible for non-sophisticated users. I'd consider this a strong advantage.
The comment you're replying to is about cloning WebAuthn credentials. This is a delicate operation, because you are effectively cloning your entire identity. So yeah, a screen seems reasonable for that purpose.
Ideally there would be a way to create "tickets" or something from an authenticator in advance and then use them for registration without physical access to the device. Then I could have 100 tickets from my backup on my master, keep the physical backup in a secure offsite location, and enroll new services using master + backup-tickets. When I run out of tickets, generate 100 more.
Being able to export/back up/restore master secrets would be nice too.
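To illustrate the tickets idea (this is purely hypothetical, not part of any FIDO spec): the offsite backup key could derive per-ticket seeds from its master secret and hand out only derived material, something like this sketch:

```ts
import { hkdfSync, randomBytes } from "node:crypto";

// Hypothetical scheme: derive N per-ticket seeds from the offsite backup
// key's master secret. The backup device can later re-derive the same
// seeds, so enrollment could use a ticket while the backup itself stays
// in the safe-deposit box.
function deriveTicketSeed(masterSecret: Buffer, index: number): Buffer {
  return Buffer.from(
    hkdfSync("sha256", masterSecret, Buffer.alloc(32), `ticket-${index}`, 32)
  );
}

const masterSecret = randomBytes(32); // stands in for the backup key's secret
const ticketSeeds = Array.from({ length: 100 }, (_, i) =>
  deriveTicketSeed(masterSecret, i)
);
// Each seed would deterministically generate one credential key pair;
// only the public halves would ever leave the secure environment.
```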
This sounds suspiciously like PGP subkeys. Having not read into how FIDO works, I'm going to now assume it works by supplying a "public key" to a third party, and the third party authenticates by having you encrypt a nonce with a private key. How far off am I?
FIDO involves creating a new public/private key pair for each website, to prevent cross website tracking. The keys are derived from a secret stored on the device, so the device doesn't need to store anything but that secret, which enables it to be used with a limitless number of websites.
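Conceptually, the derivation looks something like this sketch (real authenticators differ in details like key wrapping, nonces, and curves; `deviceSecret` stands in for the secret that never leaves the device):

```ts
import { hkdfSync, randomBytes } from "node:crypto";

// The authenticator's internal secret; never exported in the real scheme.
const deviceSecret = randomBytes(32);

// One derived key per relying party: feeding the RP ID into the KDF gives
// every site an unlinkable key, with nothing per-site to store on-device.
function deriveSiteKeySeed(rpId: string): Buffer {
  return Buffer.from(
    hkdfSync("sha256", deviceSecret, Buffer.alloc(32), rpId, 32)
  );
}

const githubSeed = deriveSiteKeySeed("github.com"); // seeds one key pair
const bankSeed = deriveSiteKeySeed("bank.example"); // unrelated to the above
```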
Cryptographically speaking it's signing a challenge, not encrypting a value (which would be a public key operation), but generally speaking yes, that's the idea of it!
One of the things FIDO adds beyond a protocol for "plain" hardware-generated and stored keys is the idea of attestation, i.e. authenticators being able to express statements like "keys can never leave this authenticator" or "this key requires PIN or fingerprint verification" – all assuming you do trust the manufacturer.
However, you should not require attestation for public services. If you let Jim sign up with his dog's name as a password, but then refuse to let Sarah sign in because her FIDO device wouldn't provide "attestation", you're crazy.
Attestation probably isn't the correct choice for almost anybody, but there are cases where it could at least make sense: if you're an employer checking employee authenticators and you gave everybody a Yubico Fantastic XXV authenticator, you might decide it makes sense to verify that the credentials enrolled are from Yubico Fantastic XXV authenticators. But still probably not. On the whole, everywhere I see UIs for managing attestation it makes me a little bit sad, because it's an attractive nuisance. Azure AD, for example, does this.
It might be that if someone has a hardware key which makes a sufficient attestation, you disable some OTHER supplemental authentication mechanism; like, if it says it requires a biometric, then you could allow auto-login in a context where you'd otherwise also require a pin or other second factor.
You can't say "Does it require a biometric?". The attestation is proof this device was issued by this manufacturer, so you're going to be whitelisting manufacturers and maybe specific product lines. Somebody technical needs to understand how to determine matches, and then somebody with authority needs to decide which ones are acceptable (maybe they're going to buy and test each model?), only if you're actively doing this is it even viable.
Now, I want you to imagine if corporate has to approve motor vehicles for the HQ's 600 vehicle car park. Of course the CEO's BMW M5 daily driver is approved, and for the first week maybe the random cars owned by people who regularly use that car park get whitelisted pretty easily. By the next month though, one of two things, either you're told just buy the same exact model of car as the CEO to "save trouble" or everybody just tells you to use a different car park nearby, and the HQ real carpark sits mostly empty because approval is a hassle.
The right answer, you can see, is to just not have a rule whitelisting cars at all. It's a bad rule.
If all you want to ask for is "use a second factor", then you can do that: there's a flag in WebAuthn, and the resulting signatures have the UV bit flag set (all WebAuthn signatures have UP set, but Presence of users is distinct from Verification of users). Because it's a signed flag, even though it's a single bit, you can rely upon it unless you don't trust the device you enrolled. But, again, why? What is your threat model where your users deliberately enrol untrustworthy devices but presumably never just helpfully enrol a device for a crook?
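For reference, checking that signed flag server-side is only a few lines; in real authenticator data the flags byte sits right after the 32-byte RP ID hash:

```ts
// Parse the signed flags byte from WebAuthn authenticator data:
// 32-byte rpIdHash | 1-byte flags | 4-byte signCount | ...
function requireUserVerification(authData: Uint8Array): void {
  const flags = authData[32];
  const userPresent = (flags & 0x01) !== 0;  // UP, bit 0: set on every assertion
  const userVerified = (flags & 0x04) !== 0; // UV, bit 2: PIN/biometric was used
  if (!userPresent) throw new Error("user presence (UP) not asserted");
  if (!userVerified) throw new Error("user verification (UV) required but absent");
}
```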
Sure, don't require attestation for services where 2FA is completely optional.
But for sensitive systems/services, why not make use of the advanced capabilities that hardware authenticators offer? I'm using one in a corporate environment, and in my view it makes a lot of sense.
I'd also not be upset if my bank would let me bypass the mandatory "account restricted, call customer support" dance every time I initiate a transfer over $10 between my own accounts using only a "trusted" authenticator brand... And banks typically care a lot about security properties like "this authenticator does not allow extracting private keys".
The vendors here are proposing a platform synchronization method such that these are both backed up as well as shared across devices within a particular platform account.
There likely is a hardware key that supports export and import of keys (even if that winds up being a fork of say the Solo key firmware). However, as an end-user one doesn't want to accidentally forget to export keys for a while, nor do they want to worry about how to properly secure a backup. So, you likely would want additional infrastructure such as vendor software which would do this for you on a schedule.
There are interesting models which could work here, such as a factory-paired 'set' of keys being sold in the same package, where only the second key (the one you kept in your fire safe) has the necessary keys to decrypt and load such a backup.
The question is whether a security manufacturer would be interested in this, as the presence of such a mechanism may prevent them from getting certain security certifications and being able to sell/be used in certain markets and scenarios.
FIDO, at least, doesn't store per-site keys on the device. You only have to back up and restore the master key, which doesn't change and so doesn't need to be scheduled.
When generating non-resident/non-discoverable credentials, many authenticators will use a source of randomness, an internal secret and possibly the name of the requesting origin to generate a private key, and export either a seed value or a wrapped version of that key in a 'credential handle' during registration. You have to pass that handle back to authenticate someone, which the authenticator will process and check to see if it was the one that issued it. Such credentials are only usable as secondary factors, because you need to know which list of registered handles to pass to the authenticator, which means you will typically need to know who the user is first.
Web Authentication and CTAP 2.x added the notion of discoverable credentials, which do not require such handles and as such are usable as a primary factor for authentication. A site can simply ask into the void "do you have any credentials for example.com" and potentially get back an answer. These necessarily require state.
Several of the platforms do not want to deal with the security ramifications of exporting wrapped keys, and simply generate and store keys even in the traditional U2F case. This is actually why the terminology was changed from 'resident' to 'discoverable' in WebAuthn L2 and CTAP 2.1 - a non-discoverable credential has the old behavior where you have to supply the handle to get back a response, but there's no guarantee that credential won't be resident in some state store of the authenticator.
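A rough sketch of the "wrapped key in a credential handle" idea, to show why a stateless authenticator can serve unlimited sites (the format here is made up; real authenticators use vendor-specific schemes):

```ts
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const wrappingKey = randomBytes(32); // the authenticator's internal secret

function wrapKey(privateKey: Buffer, rpId: string): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", wrappingKey, iv);
  cipher.setAAD(Buffer.from(rpId)); // bind the handle to the origin
  const ct = Buffer.concat([cipher.update(privateKey), cipher.final()]);
  // The handle (credential ID) is stored by the website, not the device.
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

function unwrapKey(handle: Buffer, rpId: string): Buffer {
  const iv = handle.subarray(0, 12);
  const tag = handle.subarray(12, 28);
  const decipher = createDecipheriv("aes-256-gcm", wrappingKey, iv);
  decipher.setAAD(Buffer.from(rpId));
  decipher.setAuthTag(tag);
  // Throws if this device didn't issue the handle or the rpId doesn't match.
  return Buffer.concat([decipher.update(handle.subarray(28)), decipher.final()]);
}
```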
Devices like Ledger already have seed phrase backup/restore via BIP39, but this is only safe or practical on devices with a screen which Yubico is very adamantly against ever supporting.
I wish FIDO were built into phones (in the enclave), requiring a biometric and passcode. For 99% of users this would be superior to email/password and would get rid of a lot of hacks/phishing. It doesn't require extra hardware to buy, and it simply requires a minor protocol update to have the challenge on a laptop/desktop show as a QR code (or it could be sent via BT). The phone sends the response out of band to a destination set at creation.
For users with a greater threat model (worry about enclave being hacked), they can use physical FIDO keys.
Not just Google accounts, most Pixel phones (I think I have a Pixel 2 here) do WebAuthn. I use it for GitHub (occasionally) and my private pastebin setup which is WebAuthn protected for ease of use - and I could use it for Facebook (but I never book faces on my phone) and other services.
One bonus feature does need the Google account. If you're signing into say banana.example with WebAuthn, using Chrome on a PC, and Chrome can't see any FIDO authenticators plugged in, it will try your phone! It asks Google, hey, does this user have a phone (Chrome is signed in to your Google account) ? Google says yeah, they have "amf12's Pixel 6". The Chrome browser uses Bluetooth on the PC to say "Hey, amf12's Pixel 6 are you out there? Can you do WebAuthn?" if your phone hears the Bluetooth message it's like "Hi, amf12's Pixel 6 here. Standing ready to do WebAuthn" and then via the Google servers it's arranged that your login attempt on Chrome, on the PC, is proxied to the phone, where it appears on screen ("Do you want to log in to banana.example as amf12?") and you can OK that from the phone. Nice work flow actually, although the technology is remarkably complicated.
The proposal here is using iCloud Keychain, leveraging the Secure Enclave. The only catch (for some security-minded folks, a big one) is that iCloud Keychain acts similar to a resizable HSM cluster.
Let me be more specific: this should work in apps, not just the browser, and should work with my logging into my laptop in the browser and leveraging my phone as a FIDO "key".
Apple, Google and Microsoft all have native API variants of the Web Authentication API. These typically use entitlements requiring authorization back to a website. This means e.g. a Twitter application could leverage the same authenticator registrations as twitter.com, leveraging both platform and roaming security keys.
>... and should work with my logging into my laptop in the browser and leveraging my phone as a FIDO "key".
The press release details this commitment; for instance, I can use an Android phone to log into a website on my Mac. An example of such an option should be visible on all shipping desktop Chrome browsers if you do a Web Authentication registration or authentication request (I believe unfortunately currently titled something like 'Add an Android phone'). On the Apple side, being able to leverage this is currently sitting behind a developer feature toggle.
One can hope that this will be extended to, say, the Windows platform itself - at that point, I would expect to be able to use my iPhone or Android phone to log into any Windows machine on an AAD-backed domain.
> Are there any FIDO security keys that explicitly support backing up and restoring their master secrets?
Yup, there are for sure, for I tried it and it works. Now: I tried it out of curiosity and I'm not actually using it atm, so I don't know where it's at, but...
I tried it on Ledger hardware wallets (stuff meant for cryptocurrencies, but I tried them precisely for the U2F app): I initialized the device with a seed of my own and then registered with FIDO2/U2F/WebAuthn sites. Worked fine.
Took a second hardware wallet, initialized it with the exact same seed: boom, it worked fine and I could log in using that 2nd HSM device as the 2FA.
Note that as soon as I used the 2nd device, the first one wasn't working anymore: if I wanted it to work again, I'd need to reinstall the U2F app on the HSM device (the way the device works is it only accepts apps that are signed, and that is enforced by the HSM itself: the HSM has the public key of the Ledger company so it can only install "apps", like the U2F app, actually signed by Ledger... I'm not saying that's 100% foolproof, but it's not exactly the most hackable thing on earth either).
The reason you cannot use both devices at once is because of how a counter is used: it has to be monotonically increasing, and when the app is installed on the HSM, it uses the current time to initialize that counter.
I haven't checked these lately: I know the specs evolved and I know Ledger said they were coming with a new U2F app but I didn't follow the latest developments.
Still: I can 100% confirm that it's not only doable but has actually been done.
> requires that I am able to regain access to my digital accounts using nothing but a few page paper backup including core service passwords & exported TOTP secrets.
EDIT: you basically save a 256-bit master seed as a list of 24 words (out of a fixed dictionary of precisely 2048 words, so 11 bits of entropy per word). 264 bits altogether: the last word contains 3 bits of the seed and 8 bits of checksum.
Trivial to write down. Very little chance of miswriting it, for: a) you must prove to the HSM that you wrote your seed down correctly, and b) the dictionary is known and hardly any word can be mistaken for another.
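For the curious, the arithmetic described above (this is the BIP39 scheme) is straightforward to sketch:

```ts
import { createHash } from "node:crypto";

// 256 bits of entropy + the first 8 bits of SHA-256(entropy) as checksum
// = 264 bits = 24 x 11-bit indices into the fixed 2048-word dictionary.
function entropyToWordIndices(entropy: Buffer): number[] {
  if (entropy.length !== 32) throw new Error("expected 256 bits of entropy");
  const checksum = createHash("sha256").update(entropy).digest()[0]; // 8 bits
  const bits = [...entropy, checksum]
    .map((b) => b.toString(2).padStart(8, "0"))
    .join(""); // 264 bits as a bit string
  const indices: number[] = [];
  for (let i = 0; i < 264; i += 11) {
    indices.push(parseInt(bits.slice(i, i + 11), 2)); // 0..2047
  }
  return indices; // look each index up in the BIP39 word list
}
```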
I think FIDO is not meant to be backed up in the first place. It's more that a derived key exists to safely authenticate a certain device, without being stealable to authenticate other devices you don't expect. Making it easily backupable actually defeats its whole purpose if it is intended to be used this way.
And for services that actually want it to be used as a major key, I think they can just let an authenticated device authenticate another (and even decide whether this new device can auth yet another or not). (Like the way Google does it: pop up on the user's phone and ask if the user would like to let the computer log in.)
I think what we actually need is a common protocol to authenticate a new FIDO device from an existing one. Although you can do it currently, every website has a different flow, and there is no common way. A common, machine-understandable way to authorize a new device from an existing one would ease the pain.
At least the WebAuthN standard seems to be moving in a different direction [1], which is also surprising to me.
In a nutshell, it will be possible for relying parties (i.e. websites) to detect multi-device/backup capable authenticators if required, but disabling multi-device functionality would require a very explicit opt-out, not an opt-in, on the relying party's side.
That seems to make FIDO just a non-human-readable/rememberable account/password; somewhat of a downgrade from the original hardware-enforced implementation. But it also makes it more usable to the majority of people, because keeping something without losing it is just a pain for many people (where did my fxxking key go again?). And it is still 1000x better than people using the same password on every website.
It's much more than a non-rememberable password: One of the most important attributes of WebAuthN/FIDO is that it's fundamentally impossible to fall victim to phishing.
Assuming your browser isn't itself compromised, it is impossible to authenticate using a key for service.com on evil.com. Passwords can't do that. (PAKEs or other challenge/response protocols theoretically could, if implemented in the browser and not on websites, but that's a different story.)
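The mechanics, roughly: the browser itself (not the page) writes the origin into the signed client data, and the server checks it. A server-side sketch (`service.com` and the challenge handling are placeholders):

```ts
// The signature covers authenticatorData || SHA-256(clientDataJSON), so a
// login relayed from evil.com can't masquerade as service.com: the browser
// would have filled in evil.com's origin, and the check below fails.
function verifyClientData(clientDataJSON: Uint8Array, expectedChallenge: string): void {
  const clientData = JSON.parse(new TextDecoder().decode(clientDataJSON));
  if (clientData.origin !== "https://service.com") {
    throw new Error(`unexpected origin: ${clientData.origin}`);
  }
  // The challenge is base64url-encoded random bytes issued by the server.
  if (clientData.challenge !== expectedChallenge) {
    throw new Error("challenge mismatch (possible replay)");
  }
}
```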
However, it will be weaker against malware compared to the original model, because if you can export or sync the key, then malware probably can too if your device is compromised.
The guarantee that the key is never cloned unknowingly, even if the machine is fully compromised, isn't in this new model.
> The guarantee that the key is never cloned unknowingly, even if the machine is fully compromised, isn't in this new model.
True, that was my concern with the new specification as well. But I believe the main problem will be account takeovers, not malware.
> Because if you can export or sync the key, then malware probably can too if your device is compromised.
Not necessarily. There are ways to still keep these keys only in secure hardware while allowing synchronization if the secure hardware supports service-provider mediated attestation.
HSMs usually support a similar process, where keys can be copied to a different HSM by the same vendor, preserving all of their usage and authentication restrictions (i.e. only the same set of authenticated and authorized users are able to use them).
This conceptually works by e.g. a new and old device, or a service-provider side HSM-secured backup and a new device, establishing a secured channel, attesting their state to each other and then copying the credentials over that secured channel.
By necessity, this includes the service provider (or at least their HSM code) as a trusted party: They are the one that ultimately gets to decide which new devices get added to the synchronization set, and under what circumstances (running a recent hardware and software version, multifactor authentication, providing a high or low entropy shared secret).
Of course, all of this also vastly increases the trusted computing base, and this might well not be appropriate for high security organizational environments.
Yes there are. FIDO specifies different authenticator certification levels, and Level 1 allows access to the master keys (it allows for pure software implementations). I think this page gives a decent overview of the levels... https://fidoalliance.org/certification/authenticator-certifi...
With a sufficiently programmable hardware key, yes, you can back up the secrets. See an enumeration of methods in [0]. Be careful if you plan on doing this; make sure the tradeoffs make sense to you. You probably want to do the programming from an airgapped, trustworthy Linux machine.
Beware that if you do this and lose your primary key, or if it is stolen, then an attacker can impersonate you. Setting up multiple unique keys is probably more useful in general, even if it's more cumbersome.
You can have a backup key and keep it off-site (e.g.: at a family member's place).
If you lose your main key, you can resort to that one to re-gain access.
Keys that allow backing up the secret material are tricky, since they could potentially be cloned and someone would have a backdoor with you being unable to know of it.
Hardware wallets like a Ledger allow you only once, on initialization, to back up the initial random seed to paper in the form of 24 English words via the BIP39 standard.
You can use this seed by hand or on a duplicate device to deterministically recreate all keys, be they WebAuthn, PGP, Bitcoin, or otherwise.
I use the "multiple security keys" approach, and the biggest problem is keeping track of which keys are registered with which services and making sure the list is up to date. A few examples of situations where this is a problem:
1) I don't keep all of my keys on my person, so if I want to sign up for a service when I'm not at home, I have to remember to go back and add my other keys at a later time. If I wanted to, for example, keep a backup key in an offsite location such as a safe deposit box, this would be even more painful.
2) If I lose a key, I need to go and change every service to deactivate the lost key and add my replacement key. This is both time-consuming and error-prone, as it requires me to keep a full list of providers that I use keys with somewhere.
3) Some providers do not even allow you to register multiple keys.
1. You register both the primary and the backup key with every identity provider (ie GitHub)
2. You only carry the primary key with you at all times. You keep the backup key in a physically safe space (ie next to your birth certificate).
3. In case the primary key gets lost, you make the backup key your new primary key. You can log in with it everywhere because you already registered it in step 1.
4. You order a new key which will become your new backup key.
You'd also need some way to revoke keys signed by the root if a valid hardware key were lost, stolen, or confiscated.
I think Yubico will actually do something like this for large enough customers, though revocation is left as an exercise for the customer. When I worked for AWS, I was issued a couple company YubiKeys, and there was a web portal where I could revoke a token's association with my account.
How would you add a new key at a later point in time (i.e. after your initial registration of e.g. a main and a backup key, after having lost the main key and wanting to add a new backup/main key)?
The FIDO/WebAuthN model, by design, does not include a stateful/centralized authority that could maintain the required state for any such solution.
A backup key is a good idea, but there needs to be a way to enroll the backup key for a new account (ideally automatically) without the key being physically present, since otherwise you can't (practically) store the backup key off-site. To this end, you should be able to enroll a key using only the public part of the keypair.
If you sign up using one key, do the other keys work with that account? Unless it does, you're greatly increasing the complexity of creating new accounts anywhere.
That's basically what I'm getting at. Do I need to do significant amounts of extra work to keep an off-site backup in another state?
I don't personally consider it greatly increasing complexity. At account creation I register the Yubikeys at the PC and on my keyring. When I first login from a different PC I use the Yubikey from my keyring to login and then register the Yubikey at this new PC.
Yep! Just store your backup key in a safe-deposit box with your bank.
Then go get it every time you sign up for a new account so you can make it the backup for that account
then go store it again.
and again. and again. and again.
oh no! you lost your key! time to go to the bank to get your backup, sign in to all the accounts, remove the old key, register a new backup, oh wait, got to wait for the new backup to ship, so i guess you can't do that yet. hope you don't lose your key in the meantime, anywho, time to spend a few hours painstakingly removing your lost key from all the 9 thousand sites you use.
yay! its a week to a month later, you finally got your new yubikey shipped, time to go log into 9 thousand websites again to set it up as the backup for all of the sites.
Ok, time to take it down to the bank.
whats this? a cool new app my friend wants to show me, ok, time to go drive to the bank and get my backup key out of storage and sign up for this cool new app.
You know, this whole driving to the bank thing, its kinda inconvenient, maybe i should just store it in my closet safe.
What do you mean the gas line under my house exploded? but both my yubikeys are in there!
----
The above is fiction, and even as fiction it seems ridiculous. How this would really go is even worse:
"Go get my backup key to use for this new app my friend showed me? fuck that"
.
.
"What do you mean i can't reset my password, but i lost my yubikey!"
"No, i didn't want to get up to grab my backup token when i was registering."
"Oh wait! i bet i still have the recovery codes as a pdf in my downloads folder. its a good thing no viruses ever think to look in there"
More advanced FIDO devices like the Ledger allow you to backup the initial random seed allowing you to create a duplicate device from the backup any time you wish. No sites you signed up with will know or care that you swapped devices as the new device will generate identical keys via a deterministic KDF from the seed.
You can put this seed far away and would only ever need it when you wish to replace a lost or broken authentication device.
Aside: no major US banks issue safe deposit boxes anymore other than Wells Fargo, which will stop issuing them soon as well.
By design, ideally not. WebAuthn optionally includes (as a 'SHOULD') a signature counter concept that allows relying parties to be confident that an assertion isn't coming from a previously-cloned version of the authenticator.
The specification states that cloning should merely be included in the service's underlying threat model. E.g. you might well be able to log in from an authenticator that fails the "signature counting" step if it's expected that the authenticator would allow for backing up and restoring its stored credentials.
If an "insecure" authenticator reveals itself at enrollment, the only "unevenness" is that attempts to enroll it might fail, perhaps with a request to use a securely attested authenticator instead - you would never be locked out from any service after the fact. This is better than "cloning" a supposedly secure device and then failing a count check while trying to authenticate.
Yeah; being told "you can't use that here" for some sites is a pretty uneven user experience, wouldn't you say? Not sure why the scare quotes are necessary here.
Is this actually needed? Looks like the online part of this is just WebAuthn, which could be supported by the same tools we use for TOTP. You would "enroll" a visible master secret that you could then back up and optionally store in a hardware security key. The device itself wouldn't need to allow for extracting the secret again, because you backed it up at enrollment.
I've resisted switching to a hardware key because I know that I'm going to break it, and that seems like a huge pain in the ass. I really want to be able to make a couple of backup keys, or maybe put another way, I want to be able to put the private key on the device myself, I don't necessarily care that the key is generated on the device and never leaves the device. I don't care if that slightly reduces my security - I'm not protecting nuclear weapons, my threat model is not state actors trying to attack me, my threat model is me leaving my key in my pants pocket before putting it in the washing machine.
I wanted to enable YubiKey for my Bitwarden account and feared exactly this. Now I've spent 200€ on 4 YubiKeys, so if I ever lose or break one, I will hopefully be fine. But imagine telling your grandma to buy at least 2 YubiKeys for 100€ just to make your Facebook login more secure.
I was about to say the same thing. These things are quite sturdy, and if you lose your key (as people do) you retire the old one and make a new one. These things are neither expensive nor irreplaceable. Of course if you lose your key it's going to hurt, as it should.
I've had a YubiKey for almost 5 years now. Zero issues.
Has anyone tried a Ledger or Trezor device for something like this? Your FIDO U2F private key is deterministically generated [0] based upon your seed phrase, which you can backup, and restore on other devices.
At least Ledger actually does support U2F as an installable application, but that's the predecessor to FIDO2 and has some weaknesses in comparison. I'm also not sure whether WebAuthn supports legacy U2F authenticators without the browser performing some protocol translation.
I have a Ledger as a backup key. Keyword is backup since it's less convenient to use than a Yubikey due to needing to put in a pincode first. Though that could be a security feature in and of itself.
What do you mean by a honeypot? Both are pretty well trusted devices, and users easily have tens of billions of dollars deposited on them. Given that, I feel pretty comfortable using them for 2fa.
That's exactly why FIDO [1] and WebAuthn [2] are moving towards a concept of backup-able/cross-device-sync-able authenticators.
That is arguably less secure in some contexts, but there are workarounds, and I do see the point that for most services, availability/key loss recovery is as much of a concern as is security.
Right. Some core services get this treatment, like email and important online accounts. For others I rely on reset mechanisms tied to those email accounts if I lose the primary key and haven't had the chance to register the secondary. Every few months I'll sync up anything that has been missed.
It's not perfect, but it's a hell of a lot better than TOTP.
This. For a while I tried to keep a list of accounts I needed to add my offsite key to, and then every year or so I'd retrieve the key, and add in bulk, but that became way too complicated.
While not ideal, I'd be happy if I could register with the public key of my offsite key or something similar. Really I think there should be a way to register a public persona, and add / remove keys from that persona at will.
Or, just let me (somehow) generate multiple hardware devices with a shared seed.
People have made the same incorrect assumption, that FIDO can't be backed up, many times; it is a common misconception that halts adoption of the best security win since TLS. I feel it's important to correct this in tech circles so we start telling friends and family to set up a solution to the most common account-loss problems.
Also, BIP39, the only backup spec that exists for FIDO atm, originated in the Bitcoin community where key loss is a very expensive problem that needed an elegant solution.
BIP39 can be used to back up any type of asymmetric cryptographic key that could ever exist, in a human-friendly way, but sadly I am not aware of any vendors implementing it outside of hardware wallets, which are in fact general-purpose tools.
They can be used for PGP and FIDO and password management without using them for cryptocurrency, and this is totally valid.
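For the curious, the mnemonic-to-seed step in BIP39 is just PBKDF2; a sketch using the spec's own parameters:

  // BIP39: seed = PBKDF2-HMAC-SHA512(mnemonic, "mnemonic" + passphrase),
  // 2048 iterations, 64-byte output. Inputs are NFKD-normalized.
  import { pbkdf2Sync } from "crypto";

  function mnemonicToSeed(mnemonic: string, passphrase = ""): Buffer {
    const m = mnemonic.normalize("NFKD");
    const salt = ("mnemonic" + passphrase).normalize("NFKD");
    return pbkdf2Sync(m, salt, 2048, 64, "sha512");
  }
  // The resulting 64 bytes are the root seed from which wallets (or any
  // other tool) deterministically derive their key hierarchy.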
I absolutely agree that this is a thing that needs to be solved. I cobbled together my own solution using undocumented bits of the Solo firmware[1], but that's not nearly usable enough for average users.
But here's the problem: outside of the hype bubbles, cryptocurrency stuff does not have a good reputation. If the only thing that supports this markets itself as a cryptocurrency wallet, that is going to hurt adoption. People generally do not buy devices in which they actively do not want the main feature.
(I did remind myself of DiceKeys[2] while looking through my notes to find [1], but that has its own problems, such as "oh god what are you doing why does this involve OCRing a photograph of dice on my phone".)
This problem is already well solved and deployed to millions of people. That is my point.
To your DiceKeys example: a much better solution, IMO, is using BIP39 diceware, which allows you to roll for 256 bits of entropy in the form of 24 BIP39 words with dice, using only a paper worksheet. You can use these with a KDF to deterministically generate any type of key material, be it for PGP, FIDO2, mutual TLS, or whatever you like.
BIP39 is a general purpose innovation in human friendly cryptography, and so are the general purpose personal HSM devices that support it.
I don't feel it is productive to balk at a generally useful technology just because one dislikes the biggest audience creating demand for it.
Someone can have a religious objection to porn and still enjoy the bandwidth growth and other improvements to the internet that porn demand helped create.
Just because a toaster is marketed for toast does not mean you can't enjoy it for Pop-Tarts. Just because a pressure cooker has a "chicken" button doesn't make it any less useful to a vegan.
The examples are endless.
Those who are fundamentally against experiments in decentralized governance should at least try to appreciate that the space presents high-stakes security problems that engineers will innovate to solve.
Many of the best cryptographers in the world, like Dan Boneh and his team at Stanford, spend a huge amount of their time focusing on innovations in privacy, computational efficiency, and cryptography to meet demand created by popular decentralized-systems experiments.
If anything, buy products like hardware wallets that improve security for you, and recommend their teams spin up marketing and product-development approaches more inclusive of customers who, like yourself, have moral objections to decentralized governance and value-storage use cases.
Okay, so first of all, you know very well that "moral objections to decentralized governance" is not the problem people have with blockchain technology.
But beside that, perhaps I didn't quite make the point of my comment clear enough. You're trying really hard to convince me of things. I know very well how this works, and what problems it solves, and I do think it's a good solution to this problem.
The main reason I'm not going to buy one of these isn't anything about the technology; it's because I already have a solution for myself and I have no reason to bother switching to another solution.
This isn't about my opinions. I'm explaining why the general public is going to continue to ignore this otherwise valid solution. The marketing around it actively ties it to a thing that most people have negative opinions of, and makes the feature they actually want seem like an afterthought. Even here, you repeatedly refer to it as a hardware wallet, because that has always been the primary focus of those devices.
The effect of this is that the average non-blockchain-person just sees you posting a lot of comments in a tangentially related thread trying to sell them on blockchain tech. Do you see why this, from the perspective of a non-blockchain-person, is counterproductive?
You will continue to have trouble getting people to adopt these devices until either the marketing focus changes or public opinion on blockchains changes. And one of those is going to be much easier to accomplish than the other.
I am not telling you this to bash blockchains. I do have negative opinions on that space, but I don't care to debate them here; nothing would be accomplished by either of us by doing so. I am giving you advice on the way your message is perceived by others.
The services I interact with that support WebAuthn usually only allow you to register one key. Backup and recovery is a confusing puzzle for most of these services.
Tell the services you interact with that they're basically going against the spec.
"Relying Parties SHOULD allow and encourage users to register multiple credentials to the same account. Relying Parties SHOULD make use of the excludeCredentials and user.id options to ensure that these different credentials are bound to different authenticators."
This has been talked about in HN comments almost daily for like a week — does anyone from AWS/Amazon read this forum, or are they too busy performing blood sacrifices trying to recruit graduates?
It's actually horrible! Even key rotation is horrible!
My YubiKey is getting to be about 10 years old, and I have replacements for it but find it very difficult to switch. It will eventually fail, as all things do, and that will be problematic.
The problem is that I have several dozen accounts connected to it and I don't know all of them. So either I'm carrying and trying multiple keys at all times, or I'm not getting into a site that hasn't been rotated yet.
Multiple keys on all sites is also basically impossible. You need to register all the keys, and ideally those keys are in different places.
I’m going to need to work this out soon. I picked up a pair of new YubiKey 5Cs yesterday with their sale. I’ve been using a YubiKey Neo for years for U2F, TOTP and GPG.
Moving the GPG key is easy - though I might try using the FIDO2 support in SSH instead. However for every TOTP and U2F key I’m going to have to re-enroll the new keys… It feels like there should be a better way.
I've recently started to track my service dependency graph! So like, to keep using github I need my password store, my email and one of my two security keys. To use my email, I need...
Please contact me if you're interested, I will release the tooling I have.
It would indeed be nice if you could "cross sign", CA style, a new yubikey with an old one, and that would somehow get passed along to the various services.
I have not thought about the various attack vectors that this may or may not enable though.
FIDO2 with resident SSH ed25519 keys works great, just make sure the OpenSSH client and server versions on all the machines you’ll be using the key on support it. I wish there was a way to sign Git commits somehow using them instead of PGP.
I keep an off-site backup at my parents' house. Right now it includes a printed copy of my backup codes, so if my house burns down and everything is a total loss here, I at least don't have to start from zero. They live far enough away that my offsite backup can get to be a few months out of date, but that's usually fine. (If I were to make a major change in something I'd make a special visit.)
I don't want to spend a bunch of time when I visit to find that key and add it to all of my new accounts and hope I got everything - I want to make a backup of my current key right before I visit and when I visit, I just put the new backup key in the desk drawer and take the old one home with me.
I have many more backups of my workstation etc.; should I now buy dozens of crypto hardware key thingies and constantly switch them around to match the backup disks?
For those who do offsite backups: Is an offsite backup possible across the Internet? Or do you have to physically drive the key to the offsite location?
When I create a new account somewhere, does that mean I have to move N backup keys out of their drawer to the workstation and register each of them on the account?
And how to even create a backup and keep it in sync?
With backup disks, it is a matter of shutting down the machine, removing one disk from the RAID1, and you have a backup (the removed disk is the backup). Or doing "dd if=..." if you don't use raid.
Is something as simple possible with those fancy crypto toys? Or is some arcane magic required to copy them?
Is this perhaps all as usual: An attempt to get more control and tracking of users, disguised as "security"?
With devices that support BIP39 backups like the Ledger or Trezor, you are backing up the random seed that generates all possible future accounts deterministically.
Backup once, setup 100 accounts, lose authentication device, restore backup to new device, regain access to all 100 accounts. Easy.
Most people have smartphones which ship with WebAuthn so they are good to go. Granted phones are like $500+ so by contrast a Yubikey is a much cheaper alternative for a secondary backup device for most people.
I want to have both a hardware key and a password. A password alone always has to grant me access again, ideally without needing to register a machine too. Honestly I hate that this is so widespread already; I want my devices to be non-recognizable ghosts for security purposes. An access log would be more appreciated.
I have a Yubikey and use it as a part of my passwords. But I would like to have a second master password that I only use in emergencies. Yes, it is easy to forget so make sure you don't. But a password that is rarely used is also rarely exposed to third parties.
To be honest, the biggest problem I see with FIDO is the lack of trust in the alliance of companies behind it, but I don't know too much about the technicalities of FIDO.
Hear hear. I already have enough to worry about besides this little magical security wand failing/getting lost. I require a bulletproof method-of-last-resort mechanism in place for the inevitable day when the fob is no longer available.
I have abused and soaked every model of YubiKey. I even melted the casings off every model with acetone to look up chip specs, and Yubico responded by switching to a new unidentified glass-hybrid compound no common solvents seem to impact.
In all cases the YubiKeys still worked even as bare, abused PCBs. You need a blowtorch or a drill to break one.
Without making any explicit argument for it, what I see coming out of FIDO and U2F is really a change in the importance of the long-standing "something you have, something you know..." mindset around security. That prior mode was not helping us design systems that take the human capabilities of the user into account.
Prior security seemed to focus entirely on attackers, and their agency, and what they could potentially do. But we also need to pay attention to what users can do in order to build a secure system. Requiring users to read domain name strings, potentially in Unicode, every time, and make sure they are correct, to prevent phishing, is a really bad design. Instead, have the website authenticate themselves to the user, automatically, and have the machine do the string comparisons.
Similarly, the distinction between user and password for a biometric doesn't make much sense in this case. It's neither. The user is identified by a different mechanism, the biometric is merely a way for the device to see that the user is there.
There are always lots of attack modes for biometrics, but they are convenient and good enough to capture nearly all common and practical attack modes. And a huge problem of 90s and 2000s security thinking was focusing on the wrong attack modes for the internet age.
> Without making any explicit argument for it, what I see coming out of FIDO and U2F is really a change in the importance of the long-standing "something you have, something you know..." mindset around security. That prior mode was not helping us design systems that take the human capabilities of the user into account.
I don't think that's quite true. It's a continuation of the old “something you know”, “something you have” and “something you are” authentication factors, and the idea that at least two factors should be used to authenticate.
The username/password approach is only a single factor, “something you know”.
Common 2FA solutions use “something you know” (your password) and “something you have” (a device, proven via OTP or SMS).
FIDO with biometrics trades all that for 2FA driven by “something you are” (biometrics) and “something you have” (your device's Secure Enclave).
You don't send your biometrics to the service you're authenticating with. Rather, you're using your biometrics to prove “something you are” to your device, which the device then combines with a private key to prove you're in possession of a known device. All of that is then used to authenticate with your service.
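The service-side check is plain challenge-response. A simplified sketch (real WebAuthn signs authenticator data plus a hash of the client data, not the bare challenge):

  // Server side, simplified: the device signed our random challenge with
  // the private key it unlocked via biometrics; we verify against the
  // public key stored at registration. No biometric data ever reaches us.
  import { verify, randomBytes, KeyObject } from "crypto";

  const challenge = randomBytes(32); // sent to the device at login time

  function checkAssertion(publicKey: KeyObject, signature: Buffer): boolean {
    return verify(null, challenge, publicKey, signature); // null = Ed25519
  }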
In order to enable a cloud-synced private key, you need the syncing process to require 2FA to enable new devices. The 2FA process can be clunky and slow, because you only need to do it once per device enrolment. Indeed, it needs to be clunkier, because you don't have a biometric factor available for use, as the enrolment process is normally used to onboard both a device and a device-specific biometric factor.
After that, your device becomes a known, authenticated device, which can be used as a “something you have” factor for authentication.
All of this isn’t a change from long standing authentication strategy. It’s just a refinement of process to make the underlying authentication strategy user friendly.
You make very good points! The user-focused design work of the FIDO group feels like a large departure from traditional designs, but it need not be viewed that way in terms of those elements.
I think, at the end of the day, there really isn't much of a difference between the two, for authentication. One is just a public part of who you are, and remains fairly static.
An argument for keeping the username separate is that it is often used for identification. That is, you identify on this site as alberth, not as any biometric scan. Even if you change which finger you want to use to authenticate, you'd still be alberth to everyone else here.
I think there are arguments in favor of letting you change a display name. Probably still would keep a name that is static. (What Twitter does?)
The expectation is that you also have “something you are” as provided by your device's biometric authentication.
The standard allows a service to demand that the authentication device perform an additional-factor authentication, which is usually either a PIN or biometrics, and your device attests to doing this during authentication.
So then you have two complete factors “something you have” (your phone) and “something you are” (biometrics) or “something you know” (unlock PIN for device).
> But I'm responding to GP that said biometrics are a username, not a secret, which I agree.
If you were sending an actual copy of your biometric data to the remote authentication service, then maybe you could make that argument.
But that never happens: no FIDO biometric device sends a biometric fingerprint that could be reproduced by a different device. The device authenticates you with biometrics, then uses that data to unlock a private key, which is then used to answer a challenge-response request from the authenticating service.
If you don't have the device, then it's pretty much impossible for you to correctly answer that challenge-response, despite being in possession of the biometric features that device would use to authenticate you.
So you can't use your biometrics as a username, because the device measuring the biometric data pushes that data through a mono-directional, randomly generated (at device manufacture) hash function that exists within that device only. Take your biometrics elsewhere (i.e. the same device type/model but a different physical object) and you'll get a different output even with identical inputs. Which would be a pretty useless username.
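A toy illustration of why that output is useless as a cross-device identifier, with HMAC standing in for whatever keyed one-way function a vendor actually ships:

  // Two devices measuring the identical finger produce unrelated templates,
  // because each keys its one-way function with its own manufacturing-time secret.
  import { createHmac, randomBytes } from "crypto";

  const features = Buffer.from("same-finger-feature-vector");
  const deviceKeyA = randomBytes(32); // burned in at manufacture
  const deviceKeyB = randomBytes(32);

  const templateA = createHmac("sha256", deviceKeyA).update(features).digest("hex");
  const templateB = createHmac("sha256", deviceKeyB).update(features).digest("hex");
  // templateA !== templateB, so there is nothing stable across devices
  // to use as a "username".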
> I'm not sure something you are counts as a security factor.
Something you are is an authentication factor that doesn't need to be secret to be secure. That's the whole point. You can have a high-res 3D model of my finger but you can't create a human with my fingerprint.
In the same way that the security of something you know is a scale based on “how difficult is your password to guess” or “how hard is it to crack the hash” the security of something you are is a scale based on “how difficult is it for someone to create a fake that tricks this specific machine into thinking it’s reading metrics from a live human.”
The security lives in the system reading the metrics not your body which is why you don’t have to rotate your face every 90 days.
A cheap fingerprint reader is the 4 digit pin of something you are. Retina scans that take temperature, look for blood flow and eye movement are the correct horse battery staple.
> - so what happens if you don't have your phone at time of login?
You can’t login. Same as it is with any 2FA system where you don’t have access to the second factor.
> - if I enroll on iPhone, is my identity forever tied to Apple or can it be migrated to Android if I ever wanted to change platforms?
At a minimum, services should support multiple authentication devices/tokens. So you can enrol both an iOS device and an Android device, or any other FIDO device, e.g. a YubiKey.
This is already the standard approach for FIDO tokens, and basically a requirement for existing services, because we don’t currently have FIDO token syncing.
I would hope that these syncing services will also allow you to export your private key. But that’s a slightly scary prospect because it would allow the holder of that key to authenticate as you anywhere.
> - Can Apple/Google/Microsoft ever block/ban my account, preventing me from logging into my bank, etc that use FIDO login?
Services will still need a credential recovery process. People lose phones etc. every day. I imagine your bank will happily reset your credentials if you turned up in person holding government identification.
>if I enroll on iPhone, is my identity forever tied to Apple or can it be migrated to Android if I ever wanted to change platforms?
A good FIDO implementation will give you the ability to enroll multiple authenticators. In fact, if you can't, you're basically going against the WebAuthn spec.
"Relying Parties SHOULD allow and encourage users to register multiple credentials to the same account. Relying Parties SHOULD make use of the excludeCredentials and user.id options to ensure that these different credentials are bound to different authenticators."
Basically, you should enroll your iPhone and a backup key. And if you get an Android device, you log in with the Android device using a backup key, and enroll the Android device and remove the iPhone. Alternatively, you remove the iPhone authentication using the iPhone, and enroll the Android device using an alternative authentication method (like traditional username/password).
> Can Apple/Google/Microsoft ever block/ban my account, preventing me from logging into my bank, etc
If you don't accept their 10,000 word ever-changing terms of use, and if don't let them check for the marks on your forehead or right hand, then yes, you won't be able to buy or sell.
> You technically should be able to migrate from one provider to another, it remains to be seen how easy Apple and Google will make the process.
On a UX level, the transfer to another syncing security key "provider" is going to be interesting, if they even do that at all - I kind of doubt they'll have a "transfer your iCloud passkeys to your Chrome password manager" and they'll instead say "go to each service and enroll a new security key via your new syncing key manager". On a technical level, I wholly imagine there'll be a tool that pulls iCloud Passkeys[0] via the MacOS Keychain application and then inserts them into your new key manager.
> - so what happens if you don't have your phone at time of login?
Depends on if they allow you to turn off password+2FA login entirely, which I only see being possible with something like Advanced Protection Program[0] which can already be used to enforce "Only allow authentication with my password+security keys; there is no way for Google Support to remove 2fa; if I lose the keys, the account's lost".
> - if I enroll on iPhone, is my identity forever tied to Apple or can it be migrated to Android if I ever wanted to change platforms?
I imagine they'll say "login to each website" (which you can do via iOS if you use qr android login[1]) then "re-enroll with your new provider", but I hope there will be an actual export/import or migration experience.
> - Can Apple/Google/Microsoft ever block/ban my account, preventing me from logging into my bank, etc that use FIDO login?
Assuming they don't change how Chrome and iCloud keychain currently works, everything synced should stay on your already signed-in devices, so hopefully you can continue to use your devices as authenticators until you can log into each service and register a regular, hardware key for sign-in.
1: https://www.chromestory.com/2021/12/qr-code-2fa/#:~:text=Her... I personally tried this with my iPhone, and my phone prompted me to use an iCloud Passkey. I was able to confirm that, by enrolling my iPhone as a security key on GitHub, then this 'BLE Webauthn' feature allowed me to sign in to GitHub on my desktop Chrome browser via my phone. Only downside to this is that the desktop must have a bluetooth card, but hopefully motherboards will continue to come integrated with wifi+bluetooth.
> - Can Apple/Google/Microsoft ever block/ban my account, preventing me from logging into my bank, etc that use FIDO login?
You don't need an Apple/Google/Microsoft account to use WebAuthn on another website, it's based on the biometrics on your local device. Syncing that credential across your devices with the same account is just an optional extra feature.
The biometrics don't authenticate you to the remote service. They authenticate you to the device that has the keys that authenticate you to the remote service.
Biometrics are a convenient replacement for a screen lock pattern/PIN, but not a necessary one, of course.
Fundamentally, if you want to support multiple, unlinked accounts per person you'll still need some sort of "account designator."
If you don't, then the biometric marker can just replace both password and username. The reason the username exists alongside the password is that it's problematic to guarantee uniqueness of passwords across your users. The username is unique and public; the password is neither unique nor public.
No, it shouldn't. If you wanted to associate various devices (phone, laptop, etc.) with the same account, it wouldn't work. The fingerprint produced by each device is different.
You associate biometric credentials to a username for that.
The short answer is that you can lose a biometric but still be you. So a biometric is not a username.
Also, the biometric does not actually replace a service password in this instance, it just helps authenticate you locally to a device. The key on the device is what is actually replacing your password.
Depending on the device or settings you choose, you don’t need to use biometrics at all if you don’t want to.
I believe it's because people generally find the idea comfortable and familiar based on fictional representations in movies, etc. I'm of the opinion biometric information is totally private, yet easily spoofable, thus should only be used to identify - not authorize - me.
It's not a dumb question, but that will not stop anyone, because biometric authentication works very well in practice (convenient and foolproof) despite being not very secure.
I.e., lockpicking how-tos and the existence of glass cutters don't dissuade people from having a locked front door.
> Bio-metrics are just convenient because they are unique and hard\impossible to replicate.
But if your biometric is able to be faked, you can't change it like you can change a typical text based password. There's no "reset your password" equivalent for biometrics.
Let's ignore the part about biometrics being faked since this seems to be a point of contention.
Isn't it a fair argument that secret keys should be mutable by the user? In the future, some unforeseen event COULD occur which compromises or otherwise renders the particular biometric unusable. Now what?
But they are... Firstly, consider how it works: even if you use the same finger to generate hundreds of keys, they should all be different, because we are using noise/randomness within the algorithm itself. Different sensors will generate different outputs, and therefore it is pointless to worry about the key being stolen.
I think what you want is secret keys completely detached from the user. We have that as well with hardware tokens.
Once they have a way to fake your biometric, though, they have it forever; that's the point. With a password you have a way to provide a key known only to you, and while it can be faked, it can also be reset. You can't reset your fingerprint without surgery.
> What's easier to do? stealing someone's fingerprint or cracking\guessing their password.
> Definitely the latter.
You sure about that? A properly generated (i.e. random) password won't be cracked or guessed in any reasonable amount of time, whereas a model of your fingerprint(s) can be lifted from any object you've touched and used to create a silicone mold capable of fooling many fingerprint readers. And you only have 10 of them at best; once all your fingerprints are known to potential attackers that's it; you can't use fingerprint authentication any more for the rest of your life.
Really? Can you back this up? I can. I've worked in the cyber industry for a decade now. I've seen the data, I've seen attempts to bypass both.
Biometrics are by far better for the vast, vast majority of people.
Do you even listen to what you're describing here? Trailing someone, trying to extract fingerprints? This isn't a James Bond movie.
Cyber attacks are common because they are completely digital/anonymous by nature.
Secondly, humans can't remember/generate truly secure passwords, unique for every account they own. They usually rely on a tool like a password manager.
Password managers are definitely better than weak passwords but are actually weaker than biometrics. They are a central point of failure and have been attacked in the past.
For the average Joe, biometrics are more secure since he is not using such a tool anyway.
It doesn't take James Bond to lift some fingerprints off a surface. Anyone with physical proximity and a little practice can manage that much. People have managed to fool fingerprint readers with Gummi Bears before, much less specially-designed equipment. It's a practical attack, unlike attempting to brute-force a truly random 10-character password from a 78-character alphabet (uppercase, lowercase, digits, and half of the 32 symbols on a PC-104 keyboard).
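The search-space arithmetic backs this up:

  // 10 characters drawn from a 78-symbol alphabet:
  const bits = 10 * Math.log2(78); // ≈ 62.9 bits of entropy
  const space = 78 ** 10;          // ≈ 8.3e18 possibilities
  // Even at a million guesses per second, exhausting that space takes
  // on the order of 260,000 years. Lifting a fingerprint takes an afternoon.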
> Secondly, humans can't remember\generate truly secure passwords, unique for every account they own. they usually rely on a tool like a password manager.
Which is perfectly fine. You aren't going to break their password manager either. The weak point is the users who aren't using password managers, because they try to get by with less-than-random passwords which are susceptible to cracking. Or biometrics, which aren't secret at all.
I mean you don't have to give it away if you think Google is storing databases of fingerprints for the lizard masters to track you down.
FIDO simply wants to make authentication stronger; you can use hardware keys that have a key burnt into them, which is unique and much harder to brute-force than passwords.
Again, according to how biometrics are described in whitepapers and by industry, we extract features from the fingerprint/face (sometimes very few compared to the actual biometric) and use them to derive a key.
That key cannot be reversed to get the original features, and different algorithms use different features.
> that key cannot be reversed to get the original features
"As a result, the early common belief among the biometrics community of templates irreversibility has been proven wrong. It is now an accepted fact that it is possible to reconstruct from an unprotected template a synthetic sample that matches the bona fide one."
-- Reversing the irreversible: A survey on inverse biometrics
"from an unprotected template" do you even read?
stop trying to find some random internet page to justify yourself, have you ever seen a biometric implementation? I have.
I don't know what counts as a non-random internet page, but here[0] is an article published by the "European Data Protection Supervisor" titled "14 Misunderstandings With Regard To Biometric Identification And Authentication", with number 12 being "Biometric information converted to a hash is not recoverable". It states:
> there are studies showing that the hash could be reversible, that is, it could be possible to obtain the original biometric pattern, especially if the secret of the key used to generate the hash is violated
So yes, there are secret keys involved (which the user has no control over), and no, I've never read through the code of a biometric implementation, but ultimately the space of possible values that someone's face or finger could reliably display is much smaller than even MD5, so it can be brute-forced.
If you have some non-random internet page to justify yourself, and show how much entropy is contained in a biometric hash, and how resistant to cracking that hash is, and how well secured those secret keys are, then I'd be happy to learn more.
Well, it depends on how you define replicate; I'm not aware of a technology that can perfectly recreate someone's face/fingerprint.
A photo/mask isn't perfect, and in some instances they actually fail against sensors because of that.
It is more a question of how robust the authentication method is. (Can a photo/mask fool it? That can happen sometimes, but usually requires a pretty high-quality sample.)
So their vision of the future is that to do anything online, one MUST have a phone (ahem, portable wiretap)? And they're going to be keeping my secrets for me, for my own good?
It's literally the opposite. You "must" have a cryptographic device (a dongle) that does only that one thing: authentication. It doesn't have a built-in radio (unless for NFC, if you want it), doesn't have any microphone or camera, doesn't store any data beyond what's needed to authenticate, and doesn't communicate except to authenticate - bi-directionally, so phishing is no longer a thing, or at least it's a lot harder.
It's very hard to make a privacy case against FIDO. Practically speaking it's one of the best things that has happened to privacy & security since the invention of asymmetric cryptography. The deployment of this tech reduces phishing effectiveness to near zero, or in many cases literally zero.
> It's very hard to make a privacy case against FIDO.
With username and password, I have full control over my privacy in a very easy to understand fashion: If I randomly generate them I know I cannot be tracked (as long as I ensure my browser doesn't allow it by other means).
With those keys I have an opaque piece of hardware which transfers an opaque set of data to each website I use, and I have NO idea what data that is because I do not manually type it in. I need to trust the hardware.
Sure, I could read the standard, but it very likely is complex enough that it is impossible to understand and trust for someone who has no crypto background.
And I also have no guarantee that the hardware obeys the standard. It might violate it in a way which makes tracking possible. Which is rather likely, because why else would big tech companies push this if it didn't benefit them in some way?
> Which is rather likely, because why else would big tech companies push this if it didn't benefit them in some way?
They switched to this internally a long time ago which basically eliminated phishing attacks against employees. There are security teams inside those megacorps that have a general objective of reducing the number of account takeovers, and non trivial resources to accomplish that. Not everything is a conspiracy.
Also, I am sure you will be able to stick to just passwords for a pretty long time while the world moves on to cryptographic authentication. I'm not being sarcastic here.
Yes, they also track the behavior of their employees. It is security for them and not for the user in many cases. In a perfect world those incentives align but they don't have to.
With your password manager, you're trusting a lot more: the software of the OS and kernel, the software of the browser and its dependencies, the software of your password generator and your password storage. You also have to hope the developers and administrators of the website you're signing in to aren't storing your passwords in plain text (and I don't just mean in the database - overly-aggressive APM/logging might be storing POST request data in a log stream somewhere).
The only attack that's an issue for both passwords and security key-based sign-in is targeted attacks against a website, where they use your browser to execute malicious API calls to the website after you've signed in regularly.
I'm not familiar with FIDO, but passwords place a lot of the burden on the user (must avoid repeating them, must avoid simple sequences, etc.). After years of warnings, this has barely changed - people use lousy passwords and repeat them.
So I'm all up for considering different approaches.
No. Google's power to lock people out of their website is already here with the prevalence of 'Sign in with Google'.
FIDO is unrelated; it works by having the browser/device itself sync the virtual security keys[0], much in the same way they sync passwords currently. That's the only thing changing here, giving people the choice (and encouraging them) to sign in via "what you have" instead of "what you know".
I doubt they'll do away with tools like smart cards or YubiKeys any time soon. Laptops and modern computers also contain a TPM, so you don't necessarily need to have a phone for secrets storage.
If push comes to shove, I'm sure someone will develop a lightweight Android emulation layer you can run in the cloud that pretends to be a phone enough that you can use it.
> Laptops and modern computers also contain a TPM
The root of trust for which extends to who knows where, and you're not allowed to look at the source code or learn how it works because that would threaten Hollywood's profit margins.
We're basically building a system of DRM for access to human beings, and making the whole world dependent on these unaccountable entities.
TPMs allow for arbitrary key storage by the operating system. They're not necessary for DRM. In fact, I've wiped my TPM several times to upgrade the firmware and I've had no trouble playing DRM content whatsoever.
Technologies like Intel's Management Engine and SGX, or their AMD/Qualcomm/Apple counterparts, are definitely problematic for user freedom in the way they're implemented. However, the TPM system itself is quite neutral: usually, you can clear it from the UEFI and lock it with a password (though that might need to be done from the OS), leaving whatever hostile OS you may run unable to exert any control on the device whatsoever.
I'm personally a big fan of technologies like TPMs and secure boot as long as they're user configurable. I want to be able to install my own keys and force the system to only boot operating systems of my choice. Secure boot with just the MS keys is quite silly and ever since that one version of Grub could be exploited it's basically useless; secure boot with user or enterprise keys can be an incredible tool for defence in depth, for example when visiting countries where border agents may try to gain access to your data without your permission or knowledge (China, USA, etc.).
If I had my way, I'd use Coreboot together with Secure Boot, with encryption keys stored in a TPM, the transfer of which goes through an encrypted channel (a feature of TPM 2.0 that nobody uses) after unlocking it with a password. Sadly, most Linux OS developers have a grudge against these technologies because they're used by companies such as Microsoft and Apple to reduce user freedom on some of their devices.
The user-hostile part of the TPM is the built-in key signed by the manufacturer which shows that it's an "approved" TPM which won't—for example—release any of the keys stored inside to the device's owner. This is what allows the TPM to be used as part of a DRM scheme.
If it weren't for that small detail then I would agree that TPMs can be useful for secure key storage and the like, working for the device's owner and not against them. The actually useful (to the owner) parts of the TPM do not require the manufacturer's signature.
It enables it, but that's just because both you (the device user) and M$ and the rest of the media industry need to ensure the TPM inside the processor is genuinely from the manufacturer. You wouldn't want to use a TPM if an attack vector is one where China (which is a large part of the supply chain) can poison a large number of TPM shipments with their own key that can be used to export or otherwise access internally-stored keys.
If your threat model is "China has backdoored your TPM" then making the TPM more opaque and unauditable doesn't improve the situation. How would you know if your TPM is lying and pretending to still have the original key when actually it has a replacement Chinese one?
The actual attestation process protects against this:
program generates random bytes -> ask the TPM to sign them -> on signature return, program asks the TPM for its public key -> program verifies the public key matches that of the signature -> verify the public key is cross-signed by the manufacturer's certificate authority. The only attack here would be if Intel or AMD's PKI were compromised, which would certainly be leveraged against enterprise customers before any consumer customers got hit.
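In code form, the flow looks roughly like this (the tpm object and verifyChain helper are hypothetical stand-ins for illustration, not any real TPM API):

  import { randomBytes, verify, KeyObject } from "crypto";

  // Hypothetical TPM interface, for illustration only.
  declare const tpm: {
    sign(data: Buffer): Buffer;   // ask the TPM to sign
    getPublicKey(): KeyObject;    // its attestation public key
    getCertChain(): Buffer[];     // cert chain up to the vendor CA
  };
  declare function verifyChain(chain: Buffer[], rootCa: Buffer): boolean; // hypothetical

  function attest(vendorRootCa: Buffer): boolean {
    const nonce = randomBytes(32);                        // 1. random bytes
    const sig = tpm.sign(nonce);                          // 2. TPM signs them
    const pub = tpm.getPublicKey();                       // 3. fetch its public key
    if (!verify("sha256", nonce, pub, sig)) return false; // 4. signature matches key
    return verifyChain(tpm.getCertChain(), vendorRootCa); // 5. key chains to the vendor CA
  }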
With regard to supply-chain attacks, since the TPMs are manufactured in China, they can just make a perfectly "genuine" TPM with a valid, signed key which has their backdoor. The attestation process protects DRM users (media companies) from device owners. It doesn't protect device owners from TPM manufacturers.
As I said—manufactured in China. Both the government of mainland China and the government of the Republic of China (Taiwan) consider mainland China and Taiwan to be parts of the same country. They only differ with regard to who is in charge.
The issue could be addressed without removing the ability to attest as to the TPM's origin by including a protocol for the owner to dump the device's private encryption keys (e.g. by shorting one of the external pins to ground). The fixed attestation key set by the manufacturer would need to be restricted so that it can only be used to sign attestation messages, with all other keys being generated on the device so that they can be reset when the device changes owners.
Is there a way to list this blacklist? I have several computers which haven't received updates in years and I strongly doubt that the internal blacklist has been updated.
Which is a pretty big security threat that is constantly ignored. It just isn't acknowledged when people talk positively about TPMs, even though remote attestation is completely built in by now. Security for whom becomes the question here.
My vision of future authentication (shared by colleagues in security) is based in strong hardware credentials and additional layer-7 context about identity, device and location. Basically, more identification of you and your browser using cryptographically-guaranteed and immutable events. It is actually the deprecation of passwords altogether and generally moving the trust boundary away from the control of the user entirely. I also don't enjoy it, but it would solve a lot of current problems we see in information security.
I don't know if you're being sarcastic, but your vision sounds like a nightmare and not very far removed from Gattaca.
> moving the trust boundary away from the control of the user entirely. I also don't enjoy it, but it would solve a lot of current problems we see in information security.
Every despot throughout history has noted that freedom can be traded for security, but I thought that most of us would agree that freedom is more important.
Society is replete with trade-offs sacrificing freedom for collective security. You can make moral judgements about this all day, but it won't change the dynamics of our lives.
Every technology is a double-edged sword. Like firearms, security controls can be used to guarantee peace and freedom or wage war and distress. The responsibility is with the administrator of that tool, not the tool itself.
Doesn't require a phone? Supported by desktop browsers also. Third-party "auth managers" should be possible - likely integrated into existing password managers?
It would be nice for Amazon to commit as well. AWS has support for only a single Yubikey, which is mostly useless, unless you don't care about being locked out of your account if you lose that one key.
If you're copying passwords out of a password manager and pasting them into password fields, then yes, you're getting a significant improvement to phishing protection with a hardware key. If you're using the password manager's autofill feature, and that autofill feature is bug-free, then you're not getting any additional phishing protection.
Your passwords can still be stolen, however. Any hardware authentication mechanism is going to ensure that no matter how compromised your local machine is, the worst an attacker can do is steal one active session. They can't steal the secret required to initiate any future session.
If the password manager stores the requesting site with the secret, either manually or through TOFU, then it has an opportunity to provide better phishing protection than manual copying.
This is how Android Password Store [1] works, and it regularly triggers a phishing warning (that I have to override with multiple taps) when I'm trying it out by attempting to autofill a password for one app with the password associated with a different app ID.
Granted, I also use it with my Yubikey, because that's what holds the GPG decryption key.
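The TOFU check itself is almost trivial; a sketch of the idea (not Android Password Store's actual code):

  // Trust-on-first-use origin binding for stored secrets.
  type Entry = { origin: string; secret: string };
  const vault = new Map<string, Entry>();

  function autofill(name: string, requestingOrigin: string): string {
    const entry = vault.get(name);
    if (!entry) throw new Error("no such credential");
    if (entry.origin !== requestingOrigin) {
      // The phishing case: right secret, wrong site. Make the user
      // jump through hoops before handing it over.
      throw new Error(`refusing: saved for ${entry.origin}, asked for by ${requestingOrigin}`);
    }
    return entry.secret;
  }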
First you need to invent a password manager that can be properly used? The one I have runs on my computer and trusts everything else I've ever installed not to have put in a mechanism to observe memory allocated to my browser.
The biggest attack is persistent login tokens stored on a device; e.g., Discord has an issue with malware (disguised as DMs from random people asking "do you want to try out a beta for my game") that steals the login token from appdata, using it to purchase a bunch of gifted Nitro and perpetuate the scam via that user's account.
I’m sure people way smarter than me have this figured out, from the Google post:
> When you sign into a website or app on your phone, you will simply unlock your phone — your account won’t need a password anymore.
> Instead, your phone will store a FIDO credential called a passkey which is used to unlock your online account. The passkey makes signing in far more secure, as it’s based on public key cryptography and is only shown to your online account when you unlock your phone
So if I were a dumb kid, I could log in to my parents' bank accounts (or worse) if my mom had given me her 4-digit phone passcode for games earlier?
If that kid can get their parent's finger on the fingerprint scanner, sure. The authentication part of the process is moved to the device's security system, so that's fingerprints, passcodes, and facial recognition.
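On the web side, this is the userVerification option on the assertion request; a sketch with placeholder values:

  // Sign-in request: the browser/OS handles the fingerprint/face/PIN prompt;
  // the site only receives a signed assertion, never the biometric itself.
  declare const serverChallenge: Uint8Array; // random bytes from the server
  declare const credentialId: Uint8Array;    // from registration

  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: serverChallenge,
      rpId: "example.com",
      allowCredentials: [{ type: "public-key", id: credentialId }],
      userVerification: "required", // demand unlock, not just possession
    },
  });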
I don’t think fingerprint scanners on consumer devices are always great. My daughter has one on her laptop and last week I tried my finger and it worked.
Honestly, biometrics are terrible for authorization. They're more of a username than a password and we shouldn't use them like passwords. The same is true for facial recognition algorithms, no matter how advanced.
They're so damn convenient, though. I trust the fingerprint scanner on my phone and my laptop, but there are definitely bad scanners out there.
I don't want my password to be something I leave behind on everything I touch, which the police have because I was arrested once, which can be ascertained from high quality photos, and which I can't ever change once stolen.
I've tried unlocking my laptop's scanner with my other hand and I've asked other people to put their finger on it to see if it does some kind of weird matching based on finger type. No problems so far. It even works across both Windows and Linux if I use the right Windows reboot incantations.
Since there is nothing genetic about fingerprints, I'd personally consider your daughter's laptop to be defective if you're able to unlock it. A critical part of the laptop's security mechanisms is clearly broken and should be looked at. I can't find many other stories about Dell specifically so this may be a specific unit or product line that's broken.
You may even have something to gain by reporting it; I don't know if Dell or their manufacturers do bug bounties, but this definitely sounds like something that should be accepted in such a program. Even if they don't, writing a short blog about it with the brand, model, and model of the fingerprint reader might get the press rolling, forcing Dell to take action. This is simply unacceptable.
> Since there is nothing genetic about fingerprints
While it's true that even identical twins don't have the same fingerprints [1], it's not true that there are no genetic factors in the general shape of fingerprints [2]. I agree that it's unacceptable if a fingerprint reader isn't good enough to distinguish identical twins based on the differences in fingerprints though, as those should be the most similar fingerprints possible, essentially setting a floor on the minimum uniqueness in the problem.
It seems like they would use identical twin derived validation data sets to ensure this.
...but biometrics can be lost too. I could lose my finger, I could have a facial injury. The algorithm could be changed and suddenly I can't log into anything anymore. Or I simply age and my Face ID stops working some day.
I don't know; biometrics only sound smart initially, but they seem very brittle if you think about it. Plus there are plenty of stories of people who were able to unlock somebody else's phone randomly. Just google "unlocked my friends phone via faceid".
This all seems like such theater for nothing. I think a simple "own this USB stick = proof that you are you" is a very nice second factor without any biometrics. Create a USB stick that needs to be unlocked via a passcode to work, and voilà.
* On Apple devices Touch ID allows you to register multiple fingers. And if you have a severe facial injury it will fall back to a password if it can't identify you.
* No one is unlocking their friend's phone via Face ID unless they are unconscious and have deliberately disabled the attention-detection feature.
* It is not theatre for nothing. It is a far more secure and convenient form of authentication.
Currently on the iPhone, if your FaceID or TouchID fail repeatedly, you have the option to type in the passcode, which grants the same access. I'm not sure if the same is true on Android.
I think the more general point is that "able to unlock the phone" is not / should not be the same as "I have verified that this is you" for sensitive applications and information.
For you perhaps. Phones are shared more often than you think. And no, they don't use multi-account features built into modern mobile operating systems.
I hope this cross device system will be cross platform, but I wouldn't be surprised if you could only choose between macOS/iOS, Chrome/Chrome, or Edge/Edge sync.
Funnily enough, a system for signing web authentication requests from a mobile device is far from new: I've been using https://krypt.co/ for years (though it's on the long road of sunsetting right now) and I hope that will last long enough for the new cross device standard to replace it.
It won't, at least not in the short term. For that to happen, trusted platform modules would need an API to export a private key wrapped with a certificate signed by (none/one/all/a quorum of) the members in the circle of trust and itself. This will need standardizing. Only Apple has implemented it so far, because it has total control of its ecosystem. I think for Windows and Chrome to work like this, they'll need to start requiring TPM vendors to implement this in their drivers, but I can't see it being cross-compatible with the API in the Apple TPM any time soon, especially because the circle of trust is only as strong as its weakest TPM, and it's a reputation risk for Apple if a credential gets compromised because some non-Apple device trusted by the user in an Apple circle of trust got breached.
I think the first iteration of this system will definitely receive the synced key material in RAM.
It's possible that the TPM spec will be updated to allow for loading pre-encrypted data into the TPM store as a response to this. Alternatively, existing secure computing systems (SGX/TrustZone) can also be used to decrypt the synchronised key relatively securely.
TPMs don't generally store encrypted data (bar their master key).
Instead, they wrap/seal everything with a layer of crypto; you can then pass that wrapped object around as much as you want, but only the TPM can unseal it.
A TPM could easily be instructed to seal an internally generated secret with additional escrow keys for MS/Apple/etc.
That, plus remote attestation, could make it so you never see the key in the clear.
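Conceptually it's ordinary key wrapping. A WebCrypto sketch of the seal/unseal shape, with an RSA-OAEP key pair standing in for the TPM (this is not a TPM API):

  // "Seal": encrypt a secret to the TPM's public key, so only the TPM
  // (which never releases its private half) can "unseal" it later.
  const tpmKeys = await crypto.subtle.generateKey(
    { name: "RSA-OAEP", modulusLength: 2048, publicExponent: new Uint8Array([1, 0, 1]), hash: "SHA-256" },
    false, // non-extractable, like a key living inside a TPM
    ["wrapKey", "unwrapKey"],
  );

  const secret = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 }, true, ["encrypt", "decrypt"],
  );

  // The wrapped blob is safe to store or pass around anywhere.
  const sealed = await crypto.subtle.wrapKey("raw", secret, tpmKeys.publicKey, { name: "RSA-OAEP" });

  // Only the holder of the private key (the TPM) can recover it.
  const unsealed = await crypto.subtle.unwrapKey(
    "raw", sealed, tpmKeys.privateKey, { name: "RSA-OAEP" },
    { name: "AES-GCM", length: 256 }, true, ["encrypt", "decrypt"],
  );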
As far as my understanding goes, this sealed secret is device-specific and tied to the TPM master key. That would mean you could pass it around, but you'd need to have the blob on the device itself to actually use it.
The problem is that you need private/public key pairs that are synchronised across devices for FIDO to work properly cross-device. When you register an account on your phone, you need that account key on your desktop to use it there, and that's nearly impossible without some kind of key sharing mechanism.
Yes but what the OP is saying is that the TPM does not store the encrypted passkey, rather, the passkey is wrapped with this TPM's public key by another TPM that already trusts this TPM, so this TPM can import a passkey that's been wrapped with its own public key and store it unencrypted. See Apple's circle of trust: https://support.apple.com/guide/security/secure-keychain-syn...
I understand that, but that's not supported by any current standard as far as I know. We'll need a new TPM standard for this, which probably also means it will take years before every device supports this feature as modern computers can easily last five to seven years if you replace the batteries and don't cheap out. FIDO needs something that works now, or maybe tomorrow.
Agreed, and that's why I said in my original comment that I don't see it happening in the short term. If we had something that worked now, or maybe tomorrow, and was acceptable, it would simply be virtual authenticators: an authenticator implemented entirely in software. There's no practical reason why password managers like 1Password can't do that, beyond attestation, which nobody checks anyway. But in the end, I don't see the big three participating in sharing. The threat model changes so much that, especially for Microsoft (on cell phones) and Google (on desktops), it means trusting an adversarial OS they have no control over.
Receiving synced key material in RAM significantly alters the threat model. Apple's current passkey implementation does not, at any point, handle unwrapped key material in the operating system. I expect all other implementations to follow.
I think this is really great news and am glad to see FIDO move forward as I think it greatly increases account security.
One aspect of FIDO that could still be troublesome is account recovery in case of inadvertent loss of a passkey. OOB recovery with SMS or email is considered too weak, and the main recommended alternatives are to maintain multiple authenticators (i.e. multiple copies of your passkeys), re-run the onboarding process as a new user, or just abandon the account.
It's going to be interesting to see how those alternatives play out in real world situations.
Reading this announcement, the idea seems to be that FIDO keys will be synchronised across devices. That means you can lose your phone and still get access to your accounts from your desktop.
You might even be able to get access by simply logging in to your Microsoft/Apple/Google account on a new device if they implement this system stupidly enough.
Yes, these will be stored in cloud storage like iCloud Keychain. But I can go into my iCloud Keychain and delete individual passkeys - or I may have only one Apple device and then lose it. Or some malware clears out all of my iCloud Keychain.
I haven't looked deeply enough into passkeys yet, but aren't we replacing "what if I lose my device" with "what if company XYZ decides to nuke my access to my synchronized passkey"?
Reading through the threads here: if the HN crowd can't articulate FIDO and the differences between it and the now decades-old password model to each other, I think regular jack-offs are going to have trouble.
People have the mental model that their secret is stored in their gray matter/post-it note/password manager, and now you're telling them it's in their phone, and somewhat related to the phone's security model, or maybe a "yubikey", or behind biometrics, or maybe not, it depends, and Big Co. has a copy, of something, and it's synced, and one possibly "migrates" between Big Cos., and Big Co. might deliberately/accidentally disable all your websites, or losing your device/yubikey/piece of paper means you're screwed, possibly...
Yeah, well what I want is a (physical, literal) membership card like I have at the gym or library. I think "regular" people can learn to use USB tokens, and that they might make more intuitive sense than passwords. These places don't challenge me for the "secret password" when I come in, I just present or scan my card.
It's very tricky obviously, in terms of engineering and operations, for an internet based company to arrange anything similar. But I don't think it's too mentally foreign for the user (assuming we develop good standards).
So cards make sense to me. Way more sense than passwords. Maybe someone else feels more comfortable with the details living inside their phone, but that doesn't affect my mental model. Users don't need to understand or be taught the entire standard.
As you'll have seen in lots of other posts on this topic, people want privacy, and "I just show my membership ID everywhere, what's the problem?" is unsurprisingly not what they had in mind.
So, FIDO preserves privacy by minting unique credentials for each site where you use it. This is invisible to the user, of course; from their perspective they just use their FIDO authenticator everywhere (that it works) and it's secure.
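As a rough illustration (all values below are placeholders, not any site's real parameters), this is approximately the browser call a site makes to mint a fresh, site-scoped credential:

    // Sketch of WebAuthn registration: every site mints its own fresh key pair,
    // scoped to that site's RP ID. All values below are placeholders.
    const credential = await navigator.credentials.create({
      publicKey: {
        challenge: crypto.getRandomValues(new Uint8Array(32)), // normally supplied by the server
        rp: { id: "example.com", name: "Example" },            // the credential is bound to this RP ID
        user: {
          id: new TextEncoder().encode("user-handle-1234"),
          name: "ChikkaChiChi",
          displayName: "ChikkaChiChi",
        },
        pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // -7 = ES256
      },
    });
    // The same authenticator on another site generates an unrelated key pair,
    // so the two public keys can't be correlated.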
I understand that. I was responding to the idea that hardware tokens like yubikey, in fact all alternatives to passwords, are too complicated for regular people to understand. And also saying that multiple options, to accommodate different people/scenarios, are fine and don't have to be complicated from the user's perspective. By way of analogy (admittedly I didn't make that very clear).
FIDO is an authentication standard. It doesn't care where your secrets are, it just mandates a way to use them to log in to websites. You can still use a password manager, it will just basically contain a single encryption key for all sites.
Client certificates have privacy concerns when used with ordinary web sites, as opposed to the API requests assumed in that blogpost. WebAuthn provides a tweaked featureset that addresses these concerns.
It's the classic key distribution problem. You'd have to get client certificates to people securely. It works for corporations because they can send you a device with the client cert pre-loaded.
Firstly, PKI generally doesn't have a key distribution problem because keys never get distributed; they get generated in-place, certificate signing requests are sent to certificate authorities, certificates are signed and returned.
Secondly, in the TOFU model that applies to WebAuthn, you don't even need to have a certificate authority - you can self-sign.
The problem is really, as alluded to in another comment, that if you share a single certificate across multiple sites then you are sharing a common tracking ID between them (e.g. your certificate fingerprint).
Logout is also a user experience pain point unless the certificates are stored on e.g. a smart card that can be removed.
Why aren't we doing more to validate the identity of the service we are trying to connect to? CAs don't allow me to establish my own personal web of trust. If I connect once to my bank in a method I deem safe, I should be able to store their credentials in an easy to validate way.
That way if I fall for a phishing attack, the browser can CLEARLY indicate to me that I'm encountering a new entity, not one I have an established relationship with.
Concurrently, OSes need to do a way better job of supporting two-factor locally and out of the box. To even use a YubiKey personally, you have to install their software and disable the existing login methods, or else you can still log in the original way you set up.
While we're at it, browsers and operating systems should actually lock out the second a key is no longer connected/in range. I know smart cards can behave similarly, but this needs to be grandparents level of easy to set up and control.
I would feel much safer with my elderly family having "car keys" to their PC.
The closest thing to avoiding being phished by a different "secure" entity is that your password manager will refuse to autofill (*) your credentials. But it's true that this is far from sufficient - this kind of autofill is wonky and doesn't work with all pages, so users can get conditioned to working around it by manually copying and pasting from the password manager to the browser, which defeats the protection. Many users prefer to always copy-and-paste anyway, because that avoids having to install the password manager's corresponding browser addon, which can seem more secure.
(*): Note that "autofill" only means "automatically populate credentials", not "automatically populate credentials without any user interaction". Clicking the username field, choosing a credential from a dropdown that the password manager populated for you based on which credentials match the website in question, and then having it be applied is also "autofill".
You're right, that's woefully insufficient. The authentication challenge should clearly indicate (using color and text) whether or not the challenge is an established part of your trust network, and the hardware token should be able to validate the authenticity of the challenge modal itself.
Users should be able to take an action they trust, while at the same time having the choice of that action taken away (or made more cumbersome) if they are about to get themselves into trouble.
There are people far smarter than me working on these problems, but I feel like they are so hyperfocused on state-level security that they refuse to listen to anyone regarding actual usability.
Specifically what's going on here in the cheapest FIDO devices is roughly this:
On every site where you enroll, a random private key is generated - this ensures you can't be tracked by the keys. Your Facebook login and your GitHub login with WebAuthn are not related, so although if both accounts are named "ChikkaChiChi" there are no prizes for guessing it's the same person, WebAuthn does nothing to help prove it.
The private key used to prove who you are to, say, example.com is not stored on the device. Instead, it's encrypted - using a symmetric key that is really your device's sole "identity", the thing that makes it different from the millions of others, plus a unique Relying Party ID, or RPID, which for WebAuthn is basically (SHA256 of) the DNS name, in an AEAD encryption mode - and then sent to example.com during your enrolment, along with the associated public key and other data.
They can't decrypt it; in fact, they aren't formally told it's encrypted at all. They're just given this huge ID number for your enrolment, and from expensive devices (say, an iPhone) it might not be encrypted at all - it might really just be a huge randomly chosen ID number. Who knows? Not them. But even if they were 100% sure it was encrypted, too bad: the only decryption key is baked inside your authenticator, which they don't have.
What they do have is the public key, which means when you can prove you know that private key (by your device signing a message with it) you must be you. This "I'm (still) me" feature is deliberately all that cheap Security Keys do, out of the box, it's precisely enough to solve the authentication problem, with the minimum cost to privacy.
Now, when it's time to log in to example.com, they send back that huge ID. Your browser says: OK, any Security Keys that are plugged in - I just got this enrolment ID from example.com; who can use it to authenticate? Each authenticator looks at the ID and tries to decrypt it, knowing its own symmetric key and the fact that it's for example.com. AEAD mode means the result is either "OK" plus the private key, which the authenticator can then use to sign the "I'm (still) me" proof for WebAuthn and sign you in, or "Bzzt, wrong" with no further details, and that authenticator tells the browser it didn't match, so it must be some other authenticator.
This means that if you're actually at example.org instead of example.com, the AEAD decryption fails, and your authenticator doesn't even know why it didn't work; as far as it knows, maybe you forgot to plug in the right authenticator. You not only don't send valid credentials for example.com to the wrong site, your devices don't even know what the valid credentials are, because they can't decrypt the message unless it comes from the correct site.
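Here's a minimal TypeScript sketch of that trick, assuming AES-GCM via WebCrypto with the RP ID as associated data. This is illustrative only - not any vendor's actual key-wrapping format - and all names are made up.

    // The "credential ID" handed to the site is really the per-site private key,
    // AEAD-encrypted under the device's single internal secret, with the RP ID
    // bound in as associated data.
    const te = new TextEncoder();

    async function makeCredentialId(
      deviceSecret: CryptoKey,     // AES-GCM key: the device's sole long-term secret
      privateKeyBytes: Uint8Array, // freshly generated per-site private key
      rpId: string,                // e.g. "example.com"
    ): Promise<Uint8Array> {
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const ct = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv, additionalData: te.encode(rpId) },
        deviceSecret,
        privateKeyBytes,
      );
      // iv || ciphertext is what the site stores and later sends back.
      return new Uint8Array([...iv, ...new Uint8Array(ct)]);
    }

    async function openCredentialId(
      deviceSecret: CryptoKey,
      credentialId: Uint8Array,
      rpId: string,                // supplied by the browser, not by the site
    ): Promise<Uint8Array | null> {
      const iv = credentialId.slice(0, 12);
      const ct = credentialId.slice(12);
      try {
        const pt = await crypto.subtle.decrypt(
          { name: "AES-GCM", iv, additionalData: te.encode(rpId) },
          deviceSecret,
          ct,
        );
        return new Uint8Array(pt); // right site: private key recovered
      } catch {
        return null; // wrong site (example.org): "Bzzt", with no further detail
      }
    }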
Not a great thing to see the big three once again driving the standards here. You should be worried.
But as long as the ridiculous SMS 2FA is removed or replaced by something better, then fine. But we'll see how this goes.
From the web side of this standard, this also tells me that Mozilla has no influence anywhere and will be the last to implement this standard in Firefox.
Mozilla is a member (https://fidoalliance.org/members/), so I doubt they'll be left to their own devices. They'll probably lack the manpower to implement the additions well (I mean, you can't even paste a URL to an IPv6 address in Firefox for Android, which is one of the most basic features of a browser), but then again they already have Firefox Sync and a working WebAuthn implementation.
Both FIDO and the W3C are neutral venues for this. Mozilla, being part of both the W3C and FIDO, will have a say. This is just a PR stunt that the tech journalists ate up. They've been working together on this for the last two years; it's just an announcement to accelerate the work.
The weakest link in security is always going to be humans. Account compromise is more often a human problem than a technological one (spamming requests, password reuse, simple passwords, (spear) phishing, direct social engineering, etc.).
If I'm understanding correctly, they're aiming to reduce multi-factor auth back down to a single factor that's "easier" than passwords. Easier to use. Easier to social engineer a compromise.
I get regular requests to get into my Microsoft account using their new login form that sends a key code rather than prompting for password. "Passwordless" just means that prompt goes to an app where a user unlocks their device to approve the login.
This seems like worse security, not better. I'm okay with an approval prompt if it's part of a multi-factor auth system. Not if it's the only auth.
> If I'm understanding correctly, they're aiming to reduce multi-factor auth back down to a single factor that's "easier" than passwords.
It isn't only easier, it's significantly more secure. FIDO/U2F is basically immune to phishing, because there's no one-time code to type and steal; there's a cryptographically backed signing assertion guaranteeing the person with physical possession of the token is in control. This is so airtight (because almost all account compromise is done remotely, not through physical in-person attacks) that I would even be personally comfortable disclosing my password for accounts secured by FIDO/U2F.
In a multi-factor scheme, I would agree with you. I use FIDO/U2F myself...as a secondary factor.
There are active attacks that attempt to exploit human lack of vigilance in an authentication approval flow. With a password as a first factor, it reduces the chances that these attempts make it to the user.
You and I are probably fine in terms of vigilance. If I see an auth request, say, from my Okta app, that I did not initiate, I know it's something I need to investigate and will not automatically approve it. But consider the typical user...
There's a frequent misconception that hardware keys are no better than, say, a TOTP seed on a secure element of your phone.
The core practical difference between a hardware key and that TOTP code on a secure element is that the hardware key, when registered with a domain, is programmed with the domain name in it. Lookalike domains - or anything besides the exact domain you registered the key with - fail 2FA because they are unregistered. This essentially prevents (spear)phishing attacks from stealing login credentials.
Absolutely right - put another way: the responsibility of the user is reduced from "be absolutely certain that you're entering your credentials to the web site that you think you're authenticating at" to "provide consent to authenticate".
But this seems like a technological solution to a very human problem. If I can trick a user into approving the login, then hardware fobs, secure elements, etc, are meaningless.
The audience here is likely to assume that the security is solid. And it probably is. But this is a technology targeting your average user. It'll certainly be easier for the end user. But it seems like it introduces a human-based attack vector that may be easier to exploit.
What I mean is: with modern hardware keys you literally can't trick a user into approving a remote login, or a login to a fake domain. It's not possible unless you can control the domain that the user is logging into (say you've got code execution on their machine or compromised their network and broken TLS, attacks which are significantly more complex than phishing). Hardware keys enforce that the device can only authenticate against the real domain, not a phishing domain. The core of this is a real improvement of how 2FA works at the protocol layer, rather than simply a change to how the user interacts with the device.
Hardware keys also require that the key can only authenticate a local session, so there's also no risk that your "hardware key tap" can be captured and used by a remote adversary who doesn't control the local computer.
If a user manually enters a code from TOTP device/calculator into a website, that TOTP device/calculator has no way to know which exact website domain it is - if the user visiting notmybank.com thinks they are visiting mybank.com, they'll get the right code for mybank.com from their device and get pwned.
The key part of the FIDO protocol is that it prevents the user from taking a response intended for one domain and sending it to a different domain.
That's pretty much what happened here. Obviously it's going to look a bit different afterwards because you have to mathematically tangle the time, key, and domain together. You can't really do that with the six digits of a traditional OTP code.
And like the other reply stated, if you can't mathematically tie them together, you have to rely on the user validating the domain (which you can't).
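To make that "mathematical tangling" concrete, here's a hedged TypeScript sketch of the relying-party-side check. The field layouts follow the WebAuthn spec, but the function name is my own invention, and signature verification is omitted:

    // The client signs over clientDataJSON (which embeds the origin) and
    // authenticatorData (whose first 32 bytes are SHA-256 of the RP ID),
    // so a response minted on notmybank.com can't be replayed to mybank.com.
    async function checkDomainBinding(
      clientDataJSON: ArrayBuffer,
      authenticatorData: Uint8Array,
      expectedOrigin: string, // e.g. "https://mybank.com"
      expectedRpId: string,   // e.g. "mybank.com"
    ): Promise<void> {
      const clientData = JSON.parse(new TextDecoder().decode(clientDataJSON));
      if (clientData.origin !== expectedOrigin) {
        throw new Error(`assertion came from ${clientData.origin}, not ${expectedOrigin}`);
      }
      const rpIdHash = new Uint8Array(
        await crypto.subtle.digest("SHA-256", new TextEncoder().encode(expectedRpId)),
      );
      // First 32 bytes of authenticatorData are the RP ID hash.
      for (let i = 0; i < 32; i++) {
        if (authenticatorData[i] !== rpIdHash[i]) throw new Error("RP ID mismatch");
      }
      // ...then verify the signature over authenticatorData || SHA-256(clientDataJSON).
    }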
But isn't the "thing" about FIDO (or maybe just security keys?) that the domain is also integrated into the challenge the client/key has to solve?
So from what I understand, an attacker couldn't as easily phish me by pretending to be, e.g., Google.
With a password or even a TOTP code the attacker could just pose as Google and forward the credentials to the actual site.
You're looking at an exploit from a technological point of view, which I expect this community is likely to do. Think of it from the perspective of the average user. I know for a fact if my mom was told by an attacker "if you see an approval request for your account, just accept it" she would do so. It's taken time to train her not to give anyone her password.
I've read of attackers with valid passwords spamming logins in hopes of tricking a user into approving the auth - whether because it woke the user up and they're in a sleep fog, or because they're busy and not paying attention.
Microsoft, at some point, changed their login flow so that, by default, when you enter your username, it sends a PIN. I receive regular attempts at this. This isn't going to work out for the attacker, because they have to get the PIN. But if all that's required is a button press, the attacker could just make the login request and wait.
With multi-factor auth, where a password is in use, you have to get past the password before getting to that auth approval. It reduces how much noise the user gets and the chances of success for the attacker.
You don't understand FIDO/webauthn/etc. The scenario you describe is impossible. This is the genius - the user is totally cut out of the equation, there is no action your mom can take on phishing-website.com to send the credentials of google.com, because the key will refuse to do so.
What this article is about is authenticating the request with an app on your phone, not a hardware key. This ends up being a device totally disconnected from the device requesting the auth, and the two don't have to be in the same geographic location unless proximity checks are implemented alongside the spec.
The article specifically discusses auth via app, but if it's involving the FIDO alliance, it'd be weird to exclude hardware keys, I guess. I still don't like the idea of going single factor, but if it's with a hardware key, I can see it being better than with an app since it has to directly interact with the process itself.
But, of course, if this is optional, I still have to consider the end users. I'm willing to pay for an authentic FIDO key, which can be a tad costly. Your typical user might be more inclined to go for a cheap one that does just enough to get into the account and may not be trustworthy, or would prefer not to use one at all.
My understanding is that the theoretical app being discussed behaves in the same way as a hardware key - it is simply a software-only implementation of the protocol (and thus comes with the same advantages).
That's why, with WebAuthn, humans are not part of the auth scheme anymore. All the auth is negotiated between machines (web browser -> domain name -> hardware key storage).
>"Passwordless" just means that prompt goes to an app where a user unlocks their device to approve the login.
Set up a YubiKey with an attested cert/pub key. Require a PIN to use said YubiKey. Requiring attestation will prove that the private key was generated on the device and will only ever live on that YubiKey. That's your best bet.
It also satisfies the multi-factor needs: the something you have is the YubiKey; the something you know is the PIN.
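A hedged sketch of what that registration might look like from the website's side (placeholder values throughout; verifying the returned attestation statement against a trusted root happens on the server and is omitted):

    // Registration options requesting attestation plus user verification (the PIN).
    const cred = await navigator.credentials.create({
      publicKey: {
        challenge: crypto.getRandomValues(new Uint8Array(32)), // normally from the server
        rp: { id: "example.com", name: "Example" },
        user: {
          id: new TextEncoder().encode("user-1"),
          name: "alice",
          displayName: "Alice",
        },
        pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
        attestation: "direct",                        // ask the key to prove its make/model
        authenticatorSelection: {
          authenticatorAttachment: "cross-platform",  // a roaming key such as a YubiKey
          userVerification: "required",               // forces the PIN: something you know
        },
      },
    });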
This passwordless signin process sounds neat, but will it increase Google’s power to lock people out of things? I don’t understand why Google doesn’t have an ombudsman - consumers have no recourse when Google locks them out, and it seems the consequences of Google locking you out are ever increasing. I think we’re going to need legislation to force Google to make a proper appeals process.
Google's power to lock people out of websites is already here with OAuth2.
This standard is unrelated; it works by having the browser/device itself sync the virtual security keys[0], much in the same way they sync passwords currently. That's the only thing changing here, giving people the choice (and encouraging them) to sign in via "what you have" instead of "what you know", but along with that they want to alleviate the UX concerns of people not being ready to carry around a separate physical security key.
> I don’t understand why Google doesn’t have an ombudsman - consumers have no recourse when Google locks them out
Coming soon to an EU country near you!
The EU Digital Markets Act addresses this issue directly, by requiring "gatekeepers" to provide human customer response and clear processes for appealing bans etc., generally forcing companies to provide something akin to due process.
At some point (unless you're an Android dev) you have to accept a bit of responsibility. If you use Gmail, Drive and Android, and then decide to use Android as your preferred implementation of FIDO (when YubiKeys etc. exist), I struggle to see how, if you're locked out, it's not partially the consumer's fault.
You choose to use Google, and I can attest to the fact that it's not that difficult not to.
For most people living in a western democracy, this is a pretty minor consideration to their threat model.
Most people default to what is easiest. Before TouchID, most iPhone users did not lock their phones with a password. Making biometrics readily available and default means more people are walking around with more secure devices than would be if we only encouraged people to use the absolute most secure options available.
The actual exchange with the server uses public key cryptography. How you unlock the key material locally could happen a number of ways: PIN, password, fingerprint scan, voice recognition, etc.
I think the main reason I'm never buying into FIDO keys again is that mine point-blank stopped working, and I had to sweat to get back into the websites that supported it - thankfully, back then, not many. But if identity is the responsibility of a closed piece of hardware, then if it breaks, you're locked out.
The litigation on that matter is ongoing. What you said is not true right now. If you try to fight an order for your password, you'll wind up in court and probably lose, and then have to choose whether to act in contempt.
> Passcodes can therefore be compelled if their existence, possession and authentication are "foregone conclusions," the court said in the August 2020 ruling, determining the 5th Amendment's foregone conclusion exception applied in the case.
You can be compelled by the court to divulge passwords. It's one of those areas of interpretation of law, and there's precedent against it, as can be found by searching.
For Apple devices, the keys are stored in a secure element. You need your passcode to access them when booting, or after certain timeouts. Until then you can't use Face ID/Touch ID.
Basically, we are now entrusting our logins to the world's biggest companies, with tight integration at the OS level. What's worse, these keys will be stored in the "cloud", likely on proprietary servers, using proprietary syncing.
The only way this is not shit is if somehow they allow third parties in, letting users choose a default "authenticator" app, like Bitwarden. Otherwise nothing is fixed and everything is bullshit. Hope they get regulated to hell and back.
The FIDO standard talked about here includes regular security keys, so if you don't want to use passkeys, you can get a physical security key; and while I imagine the push for passwordless will be large, I doubt they'll completely remove passwords anytime soon.
I fear services will force the use of certain devices, like those on the FIDO certified products list [0]. Will there be a way to use open hardware, open firmware, and user-controlled hardware attestation keys? Or will that be considered a fraud/bot risk?
Imagine a world where Apple, Google and Microsoft allowed users to use whatever identity provider they wanted.
You would select the provider you want, or be your own provider. Then you would be able to enable 2FA, use hardware keys and whatever else with ease and higher security.
No more passwords, as the provider basically replaces a password manager.
I wish something like OpenID would become the go-to solution some day.
It's reasonably safe to leave them connected to the devices you regularly authenticate from, unless your threat model includes an adversary willing to use physical attacks.
Unless you have some way of authenticating all of your hardware with the key, taking it with you still leaves plenty of options for a physical attacker.
Everyone forgets that the majority of people don't use password managers, let alone FIDO/WebAuthn and similar technologies. It will take a really, really long time to replace passwords.
This is a good step forward, I just hope they work on ensuring the end user experience for less technical people doesn't seem more complicated than passwords.
Has anyone found a link to the new specification draft? I'm wondering if the spec for the authenticator will be open so that anyone could build their own
No thanks. I don't want one account that Apple/Google/whoever can revoke and ruin my online life. Fuck that. I'll take my chances with 2FA and passwords. When it finally gets breached (if you haven't been cancelled!), imagine how much one online cracker will be able to do. This also allows them unlimited access to follow you all around and see what you do, where you log in, etc.
FIDO2 privacy is actually pretty good and well thought out. There's a theoretical risk of a website sending authentication challenges for two different accounts and having both assertions signed by the same credential, basically correlating those accounts together, but this is unlikely to weaken collective privacy at scale.
Kind of. Yubikeys intentionally have a very small number of devices signed with one CA key and then they produce a new CA key, so those devices do have a basically unique identifier.
The point being, the FIDO Alliance reserves the right to blacklist any device that an attacker manages to extract the secret keys from, which has the consequence that 99,999 other people have their devices bricked.
Also, the Alliance could decide to blacklist a manufacturer just because they haven't implemented some new policy (like requiring a DNA scan of the user) so you better make sure that you buy a device from one of the "too big to fail" providers.
> The point being, the FIDO Alliance reserves the right to blacklist any device that an attacker manages to extract the secret keys from, which has the consequence that 99,999 other people have their devices bricked.
1. By what mechanism can they blacklist a device? A given relying party can choose to use or not use attestation and, if they choose to use it, which certificates to trust. But that's between you and the RP. Authentication doesn't "talk to" the FIDO Alliance--which is just a standards body and does not (AFAIK) even publish anything like a CRL for "bad" attestation keys, so I don't understand what you are talking about here.
2. The intention of the attestation, as I understand it, is to enable RPs to use attestation to limit authenticators to e.g. those that pass FIPS certification (or similar enterprisey requirements), not to ban a whole batch because one key is known to be compromised. That's crazy; can you point out where anyone other than you has ever proposed this?
3. DNA scan? What are you talking about?
4. This assertion you are making, while bizarre and wrong, is very different than the assertion the grandparent made ("Yubikeys intentionally have a very small number of devices signed with one CA key...so those devices do have a basically unique identifier"), which, while also wrong, is I think a genuine mistake and not a bad-faith argument.
> A given relying party can choose to use or not use attestation and, if they choose to use it, which certificates to trust.
True, and a website could decide to issue its own certificates rather than get one from a CA trusted by browsers, but in practice (and potentially one day by law) most sites will defer to the FIDO Alliance to determine which devices are "sufficiently secure".
> the FIDO Alliance--which is just a standards body and does not (AFAIK) even publish anything like a CRL for "bad" attestation keys
"The FIDO Alliance Metadata Service (MDS) is a centralized repository of the Metadata Statement that is used by the relying parties to validate authenticator attestation and prove the genuineness of the device model."[0]
> That's crazy; can you point out where anyone other than you has ever proposed this?
"If the private ECDAA attestation key sk of an authenticator has been leaked, it can be revoked by adding its value to a RogueList."[1]
> DNA scan? What are you talking about?
I picked a deliberately extreme example to make the point that there are requirements for these devices that users might not be happy with (but might not have any choice about, once the capability becomes ubiquitous). That specific example may never come to pass, but I don't think we should assume that allowing RPs to put arbitrary conditions on the hardware we use is a power that won't be abused.
For added context: "FIDO will soon be launching a biometric certification program that ensures biometrics correctly verify users. Both certifications show up as metadata about the authenticator, providing more information to enable services to establish stronger trust in the authenticators.)"[2]
> This assertion you are making, while bizarre and wrong ... a bad-faith argument.
> True, and a website could decide to issue its own certificates rather than get one from a CA trusted by browsers…
That’s quite different. In your example, if a website does so unilaterally, client user agents break. In the FIDO case, nobody else knows or cares which authenticators an RP trusts.
More broadly, I don’t get this conspiracy theory. You’re worried…the FIDO alliance will abuse their very limited power to…what end?
> "If the private ECDAA attestation key sk of an authenticator has been leaked, it can be revoked by adding its value to a RogueList."[1]
The attestation key, which is shared among all devices? That’s rather different from what you said.
> That specific example may never come to pass, but I don't think we should assume that allowing RPs to put arbitrary conditions on the hardware we use is a power that won't be abused.
RPs already have such power. Today they use it to do things like require password complexity policies. Again, RPs aren’t the FIDO alliance; they’re the actual website you’re logging into.
Your repeated argument here is that websites should not be allowed to impose restrictions on how their users authenticate, which is hard to fathom.
In a previous version of this argument, I remember you essentially arguing that banks and enterprises should not be able to restrict what types of authenticators their employees and customers use.
I get it. You hate attestation. But “my employees must use a fips-certified key” (or “my customers must use a hardware key”) is reasonable and ultimately non-negotiable if you want people to use your protocol.
> But “my employees must use a fips-certified key” (or “my customers must use a hardware key”) is reasonable and ultimately non-negotiable if you want people to use your protocol.
I think this is the crux of where our disagreement lies. I grudgingly accept that FIDO makes it easier for companies to check that their employees are storing their keys on company-approved devices, but I don't think that arbitrary websites should be given the power to make demands about the hardware that visitors must use to create accounts. That seems like a worse position for user freedom than we have today with passwords.
You might say that websites already have this power, in some convoluted way. They could say "Enter your credit card details and postal address here and we'll send you a custom device you can use to log in to our website", but in practice no company does that. (Banks and governments are maybe special cases, and less concerning given that: their authenticators are managed out of band; they are highly regulated; they usually have actual branches that you can go to in person to sort things out; and people generally choose to interact with banks/governments that are based in their own country).
Attestation changes the market dynamics here. Suddenly it becomes acceptable for sites to bully users into buying certain types of devices, and for governments to start demanding that these devices be used as online IDs (at least for age verification, to start with). Even if companies don't abuse this power to keep people in their ecosystem (e.g. Apple sites giving you special features if you log in with an Apple device), the first casualties are going to be open source hardware and software implementations, which will be deemed insecure, and further normalise the idea that users can't go online without running proprietary code.
Yet Google, one of the key participants in the FIDO alliance, has published an open source firmware!
I agree the potential exists, in a hypothetical sense. But the dynamics are very different than you describe (with your analogy to the CA ecosystem, which, ironically, gives big platform owners far more power—yet has no evidence of such abuse!).
Right now, there is just not that much use of WebAuthn and FIDO. You’re the guy saying, “if we find a way to lower global temperatures, we should fear an ice age.” It’s premature to say the least.
I'm glad Google has published an open source firmware, and I hope that people will be able to independently verify that the hardware they use is genuinely running that firmware. Then I hope that hardware with such guarantees is not discriminated against by RPs.
The important difference with the CA ecosystem is that (in the worst case) the big platform owners can put pressure on small websites to obtain a certificate from one of a large number of competing issuers. Significantly, these issuers are not the same as the big OS providers themselves, and there are issuers who issue certificates for free. That is completely the reverse of 3 big platforms forcing end users to buy hardware, and those platforms being hardware vendors themselves.
> You’re the guy saying, “if we find a way to lower global temperatures, we should fear an ice age.”
No, I'm the frog saying "Hey, isn't this water getting a bit warm? Don't you think we should jump out before it's too late?"
But the big three can't do that, with FIDO. All they can do is influence the FIDO Alliance to add other SK manufacturers to the pseudo-CRL, which:
- is transparent
- is mediated by the FIDO Alliance; the platform makers cannot do it unilaterally, as they can with CAs in browsers
- is mediated by the RPs; even if the FIDO Alliance did do this for some reason, RPs could just ignore it with no ill effects, unlike with CA trust in browsers
- wouldn't have any effect today for the vast majority of RPs, since the vast majority do not even use attestation today
- honestly, isn't something they have any incentive to do; hardware security keys are not a meaningful source of revenue for someone like Apple, Microsoft, or Google
I'm guessing you've never worked in a big tech company before if you think they have an incentive to do that. :)
I haven't found any so far. Each account gets a new public/private key pair, so accounts can't be traced back to each other. Usernames are optional and might even become a thing of the past, making username reuse less of an issue for linking accounts.
It all depends on the sync method provided. If synchronisation isn't end-to-end protected, you're handing Apple/Google/Microsoft the keys to the kingdom which is pretty bad.
No, each registration generates a new key pair; but maybe:
> The signature counter is a strictly monotonic counter and the intent is that a relying party can record the values and so notice if a private key has been duplicated, as the strictly-monotonic property will eventually be violated if multiple, independent copies of the key are used.
> There are numerous problems with this, however. Firstly, recall that CTAP1 tokens have very little state in order to keep costs down. Because of that, all tokens that I’m aware of have a single, global counter shared by all keys created by the device. [...] This means that the value and growth rate of the counter is a trackable signal that’s transmitted to all sites that the token is used to login with.
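A minimal sketch of the relying-party-side counter check the quoted text describes (the in-memory map and function name are my own, for illustration):

    // The RP remembers the last signature counter seen per credential; a cloned
    // key eventually reuses or regresses the counter and trips this check.
    const lastCounter = new Map<string, number>();

    function checkSignCount(credentialId: string, signCount: number): void {
      const prev = lastCounter.get(credentialId);
      // Authenticators without a counter report 0, so skip the check for those.
      if (signCount !== 0 && prev !== undefined && signCount <= prev) {
        throw new Error("counter went backwards: possible cloned credential");
      }
      lastCounter.set(credentialId, Math.max(prev ?? 0, signCount));
    }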
Until your FIDO key sits in an Apple/Google/MS cloud account that has been disabled because of some alleged ToS violation and you can't extract it from your phone. No thanks.
I don't trust Google or Apple to be my main authentication provider, or to manage syncing my private key. Their customer service is terrible and they are way too arbitrary on locking folks out.
I would trust my bank (well, my credit union.) I can go see them in person if I need to and they take my lawyer seriously, they also take security seriously, they're properly regulated, and ultimately they're my main concern if someone stole my credentials, so I'd like them to be on the hook for protecting my credentials.
This announcement isn't about that, and neither provider is asking to sync your private key. In fact the opposite is true: with FIDO2, you're in much greater control of your account security, because authentication creds are now on a hardware token rather than bearer credentials you type, which an adversary can steal and replay. Many of us believe we're very good at protecting our passwords, but this isn't true in reality, and FIDO2/U2F standards objectively make accounts more secure precisely because they remove humans from the equation.
> neither provider is asking to sync your private key.
Yes, they are. According to the white paper linked in the press release:
> Just like password managers do with passwords, the underlying OS platform will "sync" the cryptographic keys that belong to a FIDO credential from device to device.
Except it kind of is - the way I read this is "Apple/Google will turn your phone into a hardware FIDO token, but will use iCloud/whatever to reduce the huge painpoint of having more than one hardware token and keeping them all in sync"
I really love the idea of FIDO and making sure that my authenticator only authenticates to sites that I've approved. Having multiple keys right now is a huge pain, but I'm not excited about "just sign up for Apple and that pain goes away", because I sure as hell don't trust Apple not to cause me pain in the future.
This is a net benefit over synced passwords, which everyone already trusts them to do. You haven't been forced to use a (syncing) password manager over a physical password book in the past, and you won't be forced to use Passkeys[0] or the Android equivalent in the future; hardware security keys will still be usable since this announcement is about embracing the FIDO Standard.
Your average user is more concerned about losing their password than they are about authenticator sovereignty. Moving towards cryptographic primitives for auth versus shared secrets is a net benefit versus current state.
> Having multiple keys right now is a huge pain, but I'm not excited about "just sign up for Apple and that pain goes away", because I sure as hell don't trust Apple not to cause me pain in the future.
Compromise is necessary, and probably a bit of regulation from government to enforce good outcomes from exception handling. Passkeys need to be stored and managed somehow, and your average user does not want to do that, just like they don't want to run their own mail server, syncthing instance, or mastodon instance.
EDIT: (HN throttling, can't reply) @signal11 You can already be locked out of all of those accounts without recourse.
> Your average user is more concerned about losing their password than they are about authenticator sovereignty
Right up to the point when they’re locked out from their Google, iCloud or Facebook accounts with little recourse or appeal. And then they discover it’s not just Google, a whole host of other services don’t work.
And it does happen, and I for one don’t want to wait for legislation to mitigate this blatant attempt at yet more centralisation.
Many authenticator apps allow you to extract and back up the private key yourself, with no involvement of any 3rd party. But it's a totally optional workflow and you're never asked for that private key while authenticating, so the mass phishing and spear-phishing attacks seen with passwords are still infeasible.
This announcement is partially about the platform-integrated authenticators being made into 'virtual' authenticators backed by a platform-vendor-specific cloud ecosystem. So, for example, a credential registered on an iPhone may be synchronized over iCloud Keychain to let me log in on my Mac via Touch ID.
This is something which has always been part of the model - an authenticator is just an abstract thing that represents an authentication factor, generates keys for a particular use, and doesn't share private keys outside its boundaries.
This announcement possibly marks a transition where sites supporting Web Authentication (with a bring-your-own-authenticator model) will go from seeing 90%+ hardware-bound authenticators to seeing 90%+ platform-integrated, synchronizing authenticators. Bundled into that prediction is a hope that this (and other proposed changes) will lead to a 10x increase in adoption.
Not at all, because before, anybody could take your account away from you if you did not accurately compare two visual strings, potentially in Unicode.
By replacing that operation, which humans can not perform reliably, with computer operations, users are no longer subject to others taking control of their account.
Let me add my recent experience to the bucket. A few days ago I upgraded my legacy Workspace account to a business account. (I was in a time crunch; couldn't evaluate alternatives.) I entered my debit card details at checkout and got a generic error message asking me to "try again later." I thought there was something wrong with their service and tried the next day. Same error. After some 15 minutes of searching forums, it turns out debit cards are not supported in my country on account of SMS-based TOTP, which doesn't work for subscription services. (If they could mention it in the haystack of their help pages, why can't they say that right when I sign up?)
Anyway, more searching led to an alternative. There's an option to request invoiced billing where I would get a monthly bill and pay - debit cards work there. Clicking that option took me to a form. I filled it in and got a call from a sales guy a few hours later. Sadly, he had no clue about my problem, despite being from my country. On top of that, he told me he's from a different team and doesn't deal with sales queries (WTF - then why did he call me?). He told me he'd email me some options, and at that point I wasn't hopeful; I thought he would send me stuff I had already seen on their forums. On seeing the said email, my disappointment sank even lower. The generic mail had absolutely nothing to do with my issue, and the help URLs were totally unrelated.
I just ended up using my friend's credit card to complete the transaction. I'm seriously considering moving elsewhere.
Is product management this pathetic at Google? I'm sure if you went for a PM interview they'd judge you nine ways to Sunday. For what? Everything Google does seems like it's built by three robots in a trench coat collaborating unsuccessfully with other robots in trench coats.
I recently moved my family's legacy GSuite service over to Fastmail, and it seems like they've carefully planned for this exact scenario. Account setup on each device is as simple as downloading a configuration profile with a QR code. And Fastmail has a built-in option to authenticate to your old Google account and pull all your mail over to the new account, preserving all the details, and then keep sync'ing until you're ready to turn the old account off. I thought I was going to have to sync things myself. Nope! Took all of five minutes to set up my account and sync. Couple weeks later I deactivated the old GSuite accounts.
And now I'm a customer again, which feels good, even though it means spending actual money.
There is no comparison between Google and Apple customer support, and they should not be mentioned in the same sentence. Google support is nonexistent. With Apple, I can chat online or get in person support. They are more like a bank.
Despite what most people think, banks are often a really long way behind on security. Banks don't care about security of any individual customer, merely security of the bank as a whole. That means if 0.01% of customers lose all their funds due to credential stuffing, it isn't an issue - the bank will just refund them if needed.
Unlike, say, SSH with key authentication, where it would be a total failure if 0.01% of attackers were allowed to log in without the key.
In the Netherlands the banks provide the iDIN system, so you can authenticate on more sites with your bank-provided login. Each bank has a slightly different system, often using a bank card and card reader, or ways to authenticate through an authorised banking app on your mobile phone.
And besides that, we also have a government-provided login system which can even work with your ID card, though it mostly works with government systems and health insurance companies.
Tracking by the bank, or both? Anyway, in Latvia we have a similar system, and it is a convenient way to authenticate with services where you MUST prove you are person X.Y.Z.
For example, some electric company, if you auth via this method, will provide you with contracts, electricity usage graphs for all the sites you own, and other info you must access as a customer. Same goes for the recycling company. These usually provide a way to register using an email matching whatever email you had in the contract (thus linking to a real person anyway).
And then there are other services where you request some data electronically and they must "register" each request - for example, requesting extended data on land/house ownership. You can't do that as a non-real-life-identifiable entity.
So login via bank is usually an option with companies you either have a legal relationship with, or where you must provide a real-life identity and would otherwise have to show a passport in person.
We have the GDPR and consumer-focused regulators in the EU. Our governments are actually out to protect citizens from corporate malfeasance, as opposed to either ignoring it or outright enabling it.
If a company abuses this data, you have strong forms of recourse available to you as a citizen, and banks are incentivised to remove bad actors, to ensure they don't become embroiled in enforcement action triggered by a 3rd party.
Suggested edit of mission statement in the name of increased accuracy:
“The standards developed by the FIDO Alliance and the World Wide Web Consortium, led in practice by these innovative companies, are the type of forward-leaning thinking that will ultimately make the American people easier to track online.
This will be done by linking all online activity to unique personal attributes, i.e. "their fingerprint or face, or a device PIN." It's basically another step towards the China model of total mass surveillance of the population.
[edit: all the justifications for this proposal - aren't they mostly solved by the use of password managers?]
The white paper is here: https://media.fidoalliance.org/wp-content/uploads/2022/03/Ho... Seems like they announced this back in March and I missed it somehow.
[1]: https://hn.algolia.com/?query=ajedi32%20webauthn&type=commen...