Just so you don't feel alone with the replies being of the typical variety, I'm 100% with you. The flaws in the "backup token" approach are rehashed constantly but the world keeps turning as though they're irrelevant.
I look forward to hardware tokens reaching a popularity level where we see implementations in software and this conversation can be rendered moot.
Shout out to Mozilla and Dan Stiner for their work so far.
I wrote that U2F implementation in software because I wanted phishing protection without needing to carry a hardware key. Well, and to learn Rust :) It's certainly a security trade-off to just store secrets in your keychain like I choose to; it is not meant to be a replacement for a hardware key, and in fact I have a Yubikey I use when the situation calls for it.
I'd love to use TPM and biometrics to implement U2F/WebAuthn on Linux and have a proper, secure solution, similar to what Apple has done with Touch ID. But that's no easy task. TPM support is poor on Linux, and other options, like relaying auth requests to your phone for approval and storing secrets in the Secure Enclave, are no easier.
> relaying auth requests to your phone for approval and storing secrets in the Secure Enclave
Like the acquired/abandoned https://github.com/kryptco/kr [key stored in a [...] mobile app] with iOS and Android apps all under an "All Rights Reserved"-source license?
Also, newer Macs have a Secure Enclave (it supports 256-bit secp256r1 ECC keys).
I'd love to have something equivalent for Linux, but given that requires hardware support I think relaying auth requests to your phone is the closest equivalent.
Software ("virtual") implementations are already possible in WebAuthn. It's up to the service whether to allow enrollment via a software authenticator; most services will want to allow this, seeing as it's still way more secure than ordinary username/password.
For web apps/services, the browser needs to be involved here too, right? (And maybe the OS?) How can I tell Chrome on my desktop to use my "software token" instead of Chrome looking for a hardware token over USB or finding it via NFC, so the remote service can ultimately interact with my (virtual) token?
(I don't even want to think about how to tell Mobile Safari on my iPhone how to find my key)
EDIT: My ideal setup, I think, is an app on my phone that I can use as my token - somehow signaling to my desktop/laptop that it's nearby and can be used as a token and ideally popping up a notice on the phone lock screen when there's an authentication request so I can quickly get to it. Then in my app, I'm free to export and backup my keys for all of the sites I'm enrolled with as I see fit. I know, I know, maybe being able to export the keys makes the setup less secure, but I will trust myself not to accidentally give the backup to a phishing site. (And I do worry that I'll accidentally get phished using a TOTP app, so I'd like to switch to FIDO, but I don't want the pain of multiple keys)
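For what it's worth, the page-side request in that setup would be an ordinary WebAuthn assertion; whether the browser satisfies it with a USB key, a built-in platform authenticator, or a nearby phone (the over-BLE flow mentioned further down) is the browser's and OS's decision, not the page's. A sketch, with hypothetical server-supplied values:

```typescript
// Sketch of the WebAuthn assertion (login) side. The page only asks for an
// assertion; the browser/OS decides which authenticator fulfils it.
// challenge and credentialIds would come from the server in practice.
async function login(challenge: Uint8Array, credentialIds: Uint8Array[]) {
  return navigator.credentials.get({
    publicKey: {
      challenge,
      rpId: "example.com", // placeholder relying-party ID
      allowCredentials: credentialIds.map((id) => ({
        type: "public-key" as const,
        id,
      })),
      userVerification: "preferred",
    },
  });
}
```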
I do NOT want to use my phone. It cannot be considered a secure device, given that the 'network' baseband chipset will never be owned by the phone's buyer and has full access to the device.
Storing your keys in secure hardware on a phone is almost certainly more secure than storing a key in software on your desktop hard drive.
If you don't trust your hardware, it's almost always game over. Desktops have devices running dubious firmware as well, but at least with a hardware key store the window of compromise ends when you apply an update; a key stolen from software storage stays stolen forever.
For me the biggest anti-phone argument is that phones break, and that is very common. They also run out of power. Hardware keys offload this to something that can go through the washing machine or ride through monsoon rains on my motorbike's keychain.
This is an urban legend. It ticks all the boxes for people who are inclined to be paranoid about these sorts of things (I realize saying that may come across as a value judgment: it isn't), so it remains a popular meme. But the "baseband controls the main phone" claim was maybe true for mid-2000s dumbphones, not modern smartphones.
That's not to say that you should trust modern smartphones. That's up to you. It's just that in whatever "trust" means to you, the baseband urban legend shouldn't come into the equation.
While I can't find the reference, I remember reading about how Apple set up their connection to the modem in a very particular way: it has its own co-processor for any code it needs to run, and the bridge between it and the main SoC is just IO for RPC and network access.
Well, the baseband always was its own processor; that's why it's a separate thing called a baseband. What you're thinking of is an IOMMU, which is like a firewall that prevents coprocessors from reading all of RAM.
If there are vulnerabilities on the AP (main processor) then you can hack it from the baseband, but also from the bluetooth or WiFi chips.
Your phone is a whole lot more secure than your PC. It has to be, because you carry it around with you all day and it's easier to steal, so it has to be resistant to a lot more things.
Caveat emptor on cheaper/older phones or ones you enabled developer modes on.
Yes, either the browser or the OS will need to be involved. For example, for WebAuthn in Chrome on Windows: Chrome receives the request, then calls the Windows Hello APIs. Windows Hello then shows a popup to read a physical security key or to authenticate a virtual security key via face/PIN (the latter is protected by a TPM, but it's "virtual" since Windows generates the key via the TPM and stores it encrypted on disk).
To support a syncing FIDO key vault, Chrome could very well redirect the calls to its own popup for choosing 'use Chrome' or 'use another key', which would then call the Windows Hello API. In fact, Chrome already supports this[0]: 'Add a new Android phone' is simply how they're presenting WebAuthn over BLE, and it works with iOS when passkeys are enabled in the iOS developer menu[1].
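A page can at least feature-detect whether a user-verifying platform authenticator (Windows Hello, Touch ID, etc.) is present before deciding which flow to offer; a sketch using the standard WebAuthn capability check:

```typescript
// Sketch: feature-detecting a platform authenticator before choosing a flow.
// The page only gets a yes/no; the actual ceremony is routed by the browser
// to the OS (Windows Hello, Touch ID, ...) or to its own "use another
// device" UI, e.g. the phone-over-BLE option described above.
async function pickAuthenticatorFlow(): Promise<"platform" | "cross-platform"> {
  if (
    window.PublicKeyCredential &&
    (await PublicKeyCredential.isUserVerifyingPlatformAuthenticatorAvailable())
  ) {
    return "platform"; // e.g. Windows Hello backed by the TPM
  }
  return "cross-platform"; // e.g. USB/NFC security key, or a phone over BLE
}
```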
A much more secure way of doing this is to use the platform's/OS's most secure way of storing private keys, which in many cases is hardware (Secure Enclave on iOS, TrustZone or "real" secure hardware like Titan M on Android, TPM on Windows/Linux).
This is already supported by many browsers (unfortunately Mozilla/Firefox are dragging their feet on this one [1]) and gives you exactly the user experience you want.
This does not solve the backup issue. It's effectively using the phone or computer as a whole as a hardware key, which introduces multiple failure modes compared to external hardware keys while also adding to privacy concerns. It might have some extremely niche use for on-prem devices in enterprise settings where the inability to sever the authentication element from the actual hardware might be convenient; other than that, TPMs are essentially a misfeature given the existence of smartcards and hardware keys.
The backup issue is solved by using an external authenticator for initial provisioning of new devices.
In a compliant implementation, you can add a new external authenticator from an existing trusted device, and a new trusted device from an existing external authenticator.
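Concretely, that provisioning step is just another WebAuthn registration performed from a session that is already trusted. A sketch, with hypothetical server-supplied values; excludeCredentials prevents re-enrolling an authenticator the account already has:

```typescript
// Sketch: enrolling an additional (e.g. external backup) authenticator from
// an already-authenticated session. challenge and existingCredentialIds are
// hypothetical server-supplied values.
async function addBackupKey(
  challenge: Uint8Array,
  existingCredentialIds: Uint8Array[]
) {
  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Service", id: "example.com" },
      user: {
        id: new TextEncoder().encode("user-1234"),
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      excludeCredentials: existingCredentialIds.map((id) => ({
        type: "public-key" as const,
        id,
      })),
      authenticatorSelection: {
        authenticatorAttachment: "cross-platform", // ask for an external/roaming key
      },
    },
  });
}
```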
> while also adding to privacy concerns.
What concerns are you thinking about here?
> TPMs are essentially a misfeature given the existence of smartcards and hardware keys.
TPMs are essentially built-in smartcards (with a few other optional features like measurements/attestation, but these have never really taken off as far as I know, other than giving TPMs the reputation they have) and are very well suited for use as platform authenticators.
> In a compliant implementation, you can add a new external authenticator from an existing trusted device, and a new trusted device from an existing external authenticator.
You can kinda sorta do this with WebAuthn if the service you're enrolling into allows for multiple authenticators (the spec recommends this, but some services don't allow more than one). But then you have to repeat that enrollment step with all devices, for every new service you sign up to. Which is practically useless because an actual backup is supposed to be stored in a safe place that might be hard to get to.
> TPMs are essentially built-in smartcards
The question is why anyone sensible would want to have a smartcard built into their computing device. The only uses I can think for it are nefarious, i.e. allowing outside services to track the user and violate their privacy.
To securely store device-specific authentication credentials such as those used by WebAuthn/FIDO, for example.
> The only uses I can think for it are nefarious, i.e. allowing outside services to track the user and violate their privacy.
A smartcard would be one of the worst or at least most complicated ways to implement tracking: It can communicate with the rest of the system only through an extremely limited interface and can strictly only ever answer requests sent by the host, never initiate requests on its own.
To do anything nefarious, it would need a privileged companion service on your computer – which doesn't gain anything from being able to talk to the smartcard.
As an aside: Even TPMs are an extremely passive technology. The only thing that arguably makes them "evil" is the fact that they can perform measurements for device attestation, but they can still never transmit these on their own. That evil is pretty indirect, in that some service providers might only allow users to use TPM-enabled and sufficiently attested clients to access their services, and exclude open hardware and software.
That's coincidentally exactly what DRM is, and it's already here, and not at all limited to TPMs. I'm cautiously optimistic though that it's possible to strike a compromise and limit attestation to properly sandboxed parts of the system, e.g. only the parts of the GPU relevant to display copyrighted movies, without getting undue access to the rest of the system.
The smartcard part of TPMs is about as capable of evil (as far as your computer and your data on it is concerned) as a USB-connected mug warmer.
You can use the "smartcard part" of a TPM. This gives you secure/non-extractable key storage.
You can use the attestation/trusted computing part of a TPM. This gives you trusted computing, which can be used for DRM, if you install software or use a service using DRM and grant it access to your system. If you don't like that, just don't do that. (Today's DRM solutions don't even use TPMs anymore, for what it's worth.)
If everyone were forced to use a TPM, it probably would still be used as a DRM mechanism. My problem is with enabling that usage in the first place when I only get negligible security improvements in return.
The only thing that kept DRM from leveraging it was indeed the low usage in consumer spaces.
This is such a restrictive security model though. Sure, devices are already identifiable. That is a security issue in my opinion. Yes, authentication is one use case where it is actually beneficial. But the security threats from this are far greater in my opinion even if you include phishing. Privacy is a concern for users even if it is conveniently ignored here.
> Your privacy is important to us.
That's the privacy statement of the group, and I think it is a straight lie. Privacy is a major, if not the major, security concern. TPM didn't address that problem, and that's why it isn't a very popular guest.