Historically, it's bad news when big corporations get involved with standardizing anything security related. It generally means it's going to be complicated, convoluted, and hopelessly tied to proprietary stuff you can only get from these big corporations. Interoperability is almost guaranteed to not be a thing.
Just look at how the big FANG companies managed to adopt OpenID and yet made it so that it can't be used to authenticate users between them. They never liked identity federation because it would mean not controlling the user's identity completely. Google loves identity standards as long as Google is the one and only party doing the authenticating. Same with Microsoft. Apple for sure is not going to let anyone do anything that matters on their platforms.
That's why people still use simple email/password as a way to authenticate. It's just too much of a pain for independent web developers to implement anything else. Take FIDO/WebAuthn, for example, which the FIDO Alliance standardized and which the W3C then restandardized/reimagined as WebAuthn. That stuff was delivered years ago in a half-assed way. Pretty much the only widely used thing in this space is the YubiKey. The only people I know who actually use that or know about it are developers.
Nobody ever did the obvious thing of hooking this stuff up to the biometric readers deployed by the billions in the form of phones, using either face recognition or fingerprint readers, laptop fingerprint readers, etc. Such an obvious thing to do. Never happened. What are Google and Apple waiting for? What's the holdup? Do they lack the imagination, or is there another reason that perfectly sane proposals from engineers they employ to do something like that consistently get shot down by management? Which is probably exactly what happened. Over and over again.
They fundamentally don't like security standards that reduce their grip on user identity and "owning" that.
You can try it currently - set an iOS device as a development target in Xcode, go to the iOS developer menu (Settings -> Developer) and turn on the passkey option[0]. Then, on a desktop or laptop with Bluetooth and in a Chromium-based browser, attempt to enroll a security key on a website that supports WebAuthn - Chrome will pop up a prompt to "add a new Android device", which will then show a QR code that you can scan on your iOS device. It'll prompt you to enroll a security key with text about "save a passkey for this account in iCloud Keychain"[1]. In reality, all it does is create and sync fresh security key private keys for use across devices, with interoperability over BLE so it's easier to use them even if you don't have an expensive iMac.
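For reference, the site-side enrolment in that flow is just the standard WebAuthn registration call; here's a minimal sketch (the rp/user values and the function name are illustrative, not taken from any real site):

    // Sketch of the WebAuthn registration call a site makes when you "add a security key".
    // The browser supplies its own UI on top (QR code, "add a new Android device",
    // the iCloud Keychain prompt); the page never sees the private key.
    async function enrollPasskey(challengeFromServer, userIdBytes) {
      const credential = await navigator.credentials.create({
        publicKey: {
          challenge: challengeFromServer,                  // random bytes from the server
          rp: { name: "Example Site", id: "example.com" }, // illustrative relying party
          user: { id: userIdBytes, name: "alice@example.com", displayName: "Alice" },
          pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
          authenticatorSelection: { userVerification: "preferred" },
        },
      });
      // credential.response.clientDataJSON and the (CBOR-encoded) attestationObject
      // go back to the server, which extracts and stores the new public key.
      return credential;
    }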
> Nobody ever did the obvious thing of hooking this stuff up to biometric readers deployed by the billions in the form of phones using either photo recognition or finger print readers, laptop finger print readers, etc. Such an obvious thing to do. Never happened. What are Google and Apple waiting for?
On Apple platforms, you can use webauthn with Touch ID and Face ID in Safari. For some reason it doesn't work on all sites (probably because they implement the older FIDO standard, not webauthn?).
These passkey FIDO credentials mentioned in the linked post are also activated with Touch ID on macOS. I have passkeys configured on one or two websites and I authenticate with Touch ID. (And still a password, but it's the first step.)
> For some reason it doesn't work on all sites (probably because they implement the older FIDO standard, not webauthn?).
Because Apple decided to improvise and added some broken and nonstandard "user interaction" requirement to Webauthn. With a whitelist for the big players.
It's not idiotic. It's to prevent automatic sign-ins, sign-in requests etc. especially from hidden iframes.
"User must interact with page for features to become active" is a very common way to deal with quite a few things in various browsers, not just Safari.
1. It's nonstandard, we need no more of that on the web. Also poorly documented. Also buggy.
2. There are way better methods, it could've been something akin to Media Engagement Index.
3. There's no "allow bypass" option. Wasting people's time unconditionally is idiotic.
4. This check applies to Webauthn both with passwords and passwordless, with or without user presence. It makes no sense that the user must click a button just to consent to being able to verify presence and then enter a PIN.
5. User gesture does not carry over to contexts where it's often needed. It breaks subpages like most login flows have, it does not carry properly between various JS callbacks, it does not carry over between multiple sites (for example to an SSO solution), it does not allow an easy way to reattempt authentication when something errors. It's so cumbersome and breaks so much without proper alternatives!
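To make the gesture problem concrete, here's a rough sketch of the pattern Safari's requirement effectively forces (option values are illustrative, and the challenge is assumed to have been fetched beforehand):

    // Sketch: with a user-gesture requirement, the WebAuthn request has to hang
    // directly off a genuine click, or the browser rejects it.
    const challengeFromServer = new Uint8Array(32); // placeholder: would come from the server,
                                                    // ideally pre-fetched, since fetching it inside
                                                    // the handler can already cost you the gesture
    document.querySelector("#login-button").addEventListener("click", async () => {
      try {
        const assertion = await navigator.credentials.get({
          publicKey: {
            challenge: challengeFromServer,
            userVerification: "preferred",
            timeout: 60000,
          },
        });
        // send the assertion to the server here...
      } catch (err) {
        // Retrying typically needs another explicit click, which is part of the complaint above.
        console.error("WebAuthn request was rejected:", err);
      }
    });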
> It's nonstandard, we need no more of that on the web
Because sometimes it's not for standards to define how an app behaves in certain situations.
> There are way better methods, it could've been something akin to Media Engagement Index.
Do read about the Media Engagement Index: https://developer.chrome.com/blog/autoplay/. Even MEI is basically "if the user engaged with content on the site, then we allow certain things". And yes, it's also non-standard, and buggy, and poorly documented.
The rest of the complaints are about the complexities of figuring out user intent in the context of the browser. Yes, yes, it's extremely hard, and lies at the intersection of malicious actors, clueless users, and browsers trying to figure out what to do across the whole spectrum in between them.
It could also be that the standard itself is badly designed (I don't know, I never read it), and didn't account for the complexities of modern web.
> Large websites got an exception
The comment above you stated "There's no whitelist AFAIK". You ignored that part, so I'm not sure where the "exceptions/whitelist" info comes from. If there's a whitelist, then yeah, it's a bad solution and shouldn't be there.
> The comment above you stated "There's no whitelist AFAIK".
FYI I was talking about WebAuthn user verification there; not Safari's requirement for a DOM user gesture. I have no idea how Safari implements their user gesture requirement or if there's a whitelist involved anywhere in that process.
Yes, I was referring to WebAuthn user verification. @Avamander claims there's a whitelist, but I couldn't find any proof of that through quick googling
I'm not talking about Webauthn's standardized user presence checking, certainly not.
The whitelist can be seen in Webkit's source when searching for "shouldBypassUserGestureRequirementForWebAuthn" or any of the whitelisted domains: dropbox.com, microsoft.com, google.com, twitter.com or facebook.com
I am saying from very practical experience this is not well made and shouldn't have been shipped to users in its current form.
There are better examples of how to avoid users getting spammed with requests; browsers have a long history of dealing with that kind of abuse much better.
> This patch loosens the user gesture requirement around using WebAuthn with respect to user gestures by removing the Quirks.h allowlist of sites that get a freebie.
> Instead the new behavior is all sites get one freebie, then on subsequent attempts they show a non-modal consent dialog.
> There are better examples how to avoid users getting spammed with any requests, browsers have a long history of dealing with that kind of abuse much better.
They really don't have much better solutions than requiring user interaction. Even Media Engagement Index that you mentioned is used by Chrome only on desktop and by calculating user interaction.
"Even adoption af standart password managers is painfully slow."
I am reluctant to use password managers on my phone because of the risk of my passwords being exfiltrated either by the app developer, by the OS manufacturer, or by whomever manages to hack in to my phone.
The attack surface for desktops, even Linux desktops, is incredibly large in comparison to the phone. The only difference is that your concern about "the OS manufacturer" hacking your phone is lessened on Linux, assuming you're auditing all packages you download or are using a permission-based OS like Nix.
The FIDO2 spec is written by someone who has no idea how web APIs work. Lots of it is based on CBOR instead of JSON, which makes it a PITA to handle in JS code (yes, there is a cbor npm library, but why… WHY?!?).
Negative numbers are encoded in such a way that they can express values that can't fit into an int64. Undefined can be encoded. Floating point numbers are sent raw, so Infinity, -Infinity, -0, signalling NaN and other NaN values can be encoded. A horrific choice for a security-related API - a security API should never add unnecessary corner cases.
Type 1 encodes negative integers, with a value of −1−count, for values from −2^64 to −1.
The values 20–23 are used to encode the special values false, true, null, and undefined.
Short counts of 25, 26 or 27 indicate that a following extended count field is to be interpreted as a (big-endian) 16-, 32- or 64-bit IEEE floating point value.
Edit: oh god no. https://www.rfc-editor.org/rfc/rfc8949.html Epoch-Based Date/Time allows 5 different integer or floating point representations (what to do with NaN is unspecified, so t = +Infinity and beyond are representable). Tag number 36 is for MIME messages. Read the rest of the RFC for a truly hideous kitchen-sink specification.
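To make those corner cases concrete, here's a rough sketch of reading a single CBOR data-item head in JS. It's not taken from the spec text or any library (a real decoder such as the cbor npm package does far more); it just shows where the oddities live:

    // Minimal sketch of decoding a CBOR "initial byte" plus its argument.
    function readCborHead(bytes, offset = 0) {
      const initial = bytes[offset];
      const majorType = initial >> 5;    // top 3 bits
      const info = initial & 0x1f;       // low 5 bits ("short count")
      const view = new DataView(bytes.buffer, bytes.byteOffset + offset + 1);

      // Extended counts: 24 -> 1 extra byte, 25 -> 2, 26 -> 4, 27 -> 8 (big-endian).
      let count = BigInt(info);
      let extra = 0;
      if (info === 24) { count = BigInt(bytes[offset + 1]); extra = 1; }
      if (info === 25) { count = BigInt(view.getUint16(0)); extra = 2; }
      if (info === 26) { count = BigInt(view.getUint32(0)); extra = 4; }
      if (info === 27) { count = view.getBigUint64(0); extra = 8; }

      if (majorType === 0) {
        // Unsigned integer, up to 2^64 - 1.
        return { value: count, consumed: 1 + extra };
      }
      if (majorType === 1) {
        // Negative integer, encoded as -1 - count: reaches down to -2^64,
        // which doesn't fit in an int64 at all.
        return { value: -1n - count, consumed: 1 + extra };
      }
      if (majorType === 7) {
        // Simple values 20..23: false, true, null and (yes) undefined.
        if (info === 20) return { value: false, consumed: 1 };
        if (info === 21) return { value: true, consumed: 1 };
        if (info === 22) return { value: null, consumed: 1 };
        if (info === 23) return { value: undefined, consumed: 1 };
        // Raw IEEE floats, so NaN, Infinity and -0 are all representable.
        if (info === 26) return { value: view.getFloat32(0), consumed: 5 };
        if (info === 27) return { value: view.getFloat64(0), consumed: 9 };
        // (info === 25 would be a half-precision float; omitted here.)
      }
      throw new Error("major type / argument not handled in this sketch");
    }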
I agree that CBOR is unnecessarily shoehorned into the spec, but it honestly isn't that bad. IMO there are two other problems that need to be resolved for webauthn to go mainstream. The first is the fact that there is no mechanism for changing/updating the URL tied to a device for a website. This is intentional for phishing reasons, but if you want to rebrand or are acquired you're going to run into problems. The second is that the frontend JS API is a bit awkward to implement.
> The first is the fact that there is no mechanism for changing/updating the url tied to a device for a website. This is intentional for phishing reasons, but if you want to rebrand or are acquired you're going to run into problems.
You can't solve this "problem". The RPID is bound into the actual credentials, it isn't that there's no mechanism provided to change it, but rather that it is part of the credentials themselves.
If you want actionable advice for people whose big boss has determined that they absolutely must change from foo.example to bar.example, they need to use the foo.example site to accept the credentials and forward users to bar.example and then re-enrol them to the "new" bar.example account with new credentials. Users will probably be annoyed about this hassle, but you can't do anything about that it was sealed in with the decision to get rid of foo.example.
If your feedback is "But that's user hostile" yeah, so, don't change from foo.example to bar.example. What seems so very important to you is just another pointless annoyance for your users. Remember, you don't need to put the foo.example name on employee badges or the corporate logo to keep it, you only need to keep the name and use it to do the auth step.
The same "Re-brand at all costs" thinking destroys users bookmarks, email address books, word-of-mouth, - destroys so much intangible value, and yet you can bet that the executives who decided upon it will argue that since they didn't measure that value they're destroying it doesn't exist.
WebAuthn is a web protocol. FIDO however is a device protocol.
On the Web, ASCII human readable JSON is a great choice. On a cheap USB device ASCII human readable JSON sounds like a lot of extra work for no reward.
The device doesn't need to implement most of CBOR and if all you use it for is WebAuthn you don't either.
> Just look at how the big FANG companies managed to adopt OpenID and yet made it so that it can't be used to authenticate users between them. They never liked identity federation because it would mean not controlling the user's identity completely. Google loves identity standards as long as Google is the one and only party doing the authenticating. Same with Microsoft. Apple for sure is not going to let anyone do anything that matters on their platforms.
Actually, OpenID Connect 1.0 - the thing that most companies (like Google, Apple, Microsoft) use - does not yet support federation. OpenID Connect is basically a little bit more than OAuth2 (which was never about federation).
The original OpenID Authentication spec was about federation, but barely anybody implemented it. (I can only think of StackOverflow, which dropped it in 2018, and something at RedHat/Fedora - can't remember.)
The device tends to be tied to an underlying "cloud identity" (apple id, google account, microsoft account etc).
You can use that account to re-enrol / manage devices etc.
Ideally people have more than 1 device enrolled to enable easy fallback if one is lost. However, if the user loses access to their primary / cloud account for any reason, they will have a bad day.
A number of these FIDO posts are cropping up and I still don't really get it. Someone in a previous one explained the benefits over and above my current password manager set up.
However you have articulated the single biggest problem with this, and that is you are not in control of this thing.
I'm actively not tying anything to a cloud identity, ever. And I'm not having multiple devices or multiple cloud identities for when Google's (et al.) automated AI-type system decides I'm not getting access to my stuff.
It seems to me a centralised single point of weakness, open to exploitation by the multinational corps.
Fundamentally, there should be no need for a "cloud identity".
I have a couple Yubikeys, for example, and I can treat them like keys to my house. Somewhat different, in that I cannot make a copy as directly, but that's honestly a good thing. I just have most of my applications set up to accept at least two of these keys, and I store one as backup generally along with a file-based key as backup somewhere secure.
This is essentially what the cloud services should have on your account to "re-enroll" you, just self-managed.
I want to get to the point where I can treat my physical keychain's USB/NFC (though, call me old school, I really like the direct physical connection) key much like a car key. Even automatically locking my laptop when I remove it (or get too far away).
Agree on fundamentals - after all that's how it works now, you don't need FAANG for this.
I do suspect that until it works "by default" for most users (without requiring any explicit configuration, backup or maintenance) then adoption will remain low.
> I do suspect that until it works "by default" for most users (without requiring any explicit configuration, backup or maintenance) then adoption will remain low.
That is exactly what these recent announcements from the big platforms have been about.
Yep. That's why I always hope for layers of standards. Lower level standards to enable higher level standards eventually all the way up to your gramps.
Part of it is about avoiding the possibility of screwing up by e.g. picking poor passwords, giving your password to the wrong site, or allowing replay attacks (though I guess TLS mostly protects from that).
You don't need to rely on cloud backups, though many users will. It is also usually possible to set up things like account recovery codes (which you print out and put somewhere safe) or TOTP auth (more useful if you e.g. aren't able to carry a Yubikey everywhere with you / can't plug it into your phone).
> A number of these FIDO posts are cropping up and I still don't really get it. Someone in a previous one explained the benefits over and above my current password manager set up.
This post and the last one[0] are the same exact statement, this one was just posted again.
> It seems to me a centralised single point of weakness open to exploitation from the multi national corps.
IMO this is mostly a misunderstanding, the proposal is a commitment to implementing a host of features to make regular users actually consider using security keys:
- Apple and Google are implementing a syncing security key service, likely on top of their existing password manager infrastructure [iCloud Keychain[1,2] on Apple devices]
- They are also committing to wireless Webauthn-over-BLE technology, so you can do things like scanning a QR Code on your Windows desktop to sign in with a security key stored in your iCloud Keychain
Otherwise, these concerns of control have existed for nearly a decade with Sign in with Google[3]. Now they're making it more secure by not having a third-party as the actual broker between the user and the service provider, they just assist in the UX department.
Obviously, Google won't care and people will lose their fortunes. Some of them will come to Hacker News and grouse that they have been treated unfairly. Government will remain silent because they can easily spy on people using data obtained from "big" tech.
Yubikeys are awesome. I have a copy of a signing only PGP key on a Yubikey and I use it to sign all of my git commits. Easy and very secure. Cheap too.
> Nobody ever did the obvious thing of hooking this stuff up to biometric readers deployed by the billions in the form of phones using either photo recognition or finger print readers, laptop finger print readers, etc.
The last thing I want to expose to some random shady internet site is my photo or fingerprint.
If it's FIDO2, then you don't actually expose biometrics to the site. The verification is done on device and unlocks a key saved in the TPM/secure enclave.
Of course, once Face ID/Windows Hello logins on the web become a thing, web sites may trick users into enabling their regular camera instead, for whatever reason.
Just so it's extra clear: once the key in your Secure Enclave is unlocked, it decrypts the repository of passkeys on your drive, and the matching passkey is then used to authenticate to the service.
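To be precise about what crosses the wire: the page only ever sees the standard WebAuthn assertion call and its signed result; a minimal sketch (parameter names are illustrative, and the challenge/credential ID would come from the server):

    // Sketch of a WebAuthn login (assertion) request. The OS handles the Touch ID /
    // Face ID / PIN prompt locally; the site only receives a signed assertion,
    // never the biometric itself.
    async function signInWithPasskey(challengeFromServer, credentialIdFromServer) {
      const assertion = await navigator.credentials.get({
        publicKey: {
          challenge: challengeFromServer,      // random bytes from the server
          allowCredentials: [{
            type: "public-key",
            id: credentialIdFromServer,        // the credential registered earlier
          }],
          userVerification: "required",        // ask the platform to verify the user locally
          timeout: 60000,
        },
      });
      // assertion.response.signature (plus authenticatorData and clientDataJSON) is what
      // goes to the server for verification; no fingerprint or face data is included.
      return assertion;
    }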
No solution on a PC or non-Apple laptops offers this. Apple and Android devices can because they have secure enclaves with the capability to do trusted analysis and attestation of fingerprints and photos, but no one else has the required technology and the required secure communication channels between the involved components.
You’re wrong. Multiple vendors have what Apple calls a Secure Enclave. On PC, it’s called the Trusted Platform Module and somehow the whole PC crowd has decided that thing is unequivocally the road to fascism.
The HN crowd hates passwordless sign ins but most of us actually know how to keep our passwords secure, at least relative to the average user. Non-techies have their passwords leaked and identities stolen time and time again. And get frustrated when they keep forgetting their password. Biometric passwords are incredibly useful for these people.
Of course Apple, Google, Microsoft have to keep non-biometric password authentication. Otherwise a lot of people (including me) will have to simply stop using their products, not even because of privacy, just because I doubt a Linux computer without a camera or old iPhone model will work for them.
Some argue that users without biometric authentication will be flagged suspicious and pressured to hand over biometric data. But honestly, I think so many people will stick to regular passwords, and more will be physically unable to use biometric passwords, so it won’t work.
Overall I think having the option to use biometric authentication is a net positive. Companies want them to get user data, but non-technical people want them because (when done right) they’re a lot easier and more secure than passwords.
Passwords are extremely counterintuitive and require an understanding of the architecture of the systems to make any sense of them. Why do you need a password for your phone, and then why do you need a password for the apps and websites? Why do you need an SMS and why do you need a code generator? Why do you sign in to your Google account to sign in to Stack Exchange? None of that makes sense unless you understand the distinctions between the systems, and all of that happens while we race to make the experience seamless as if there were no distinctions. It's getting so messy that phishing becomes hard to detect even for more experienced people; you simply follow the instructions, typing codes and clicking links.
It only makes sense to have a device that can identify you personally and then use that device to get identified to other systems. That's actually why device fingerprinting is so effective at tracking people, even across anonymous accounts (remember that companies like FB can identify you without you ever creating an account with them).
Passwords' only remaining use is account sharing, IMHO. Once everyone goes passwordless, sharing a Netflix account will be a thing of the past, for example. Maybe this is one of the motivations of big tech, but overall I think it's for the greater good, and sharing a service will be served better through explicit means of sharing the service.
This sounds dystopian and also really bad engineering.
First, I don't want a single login for everywhere. For obvious privacy reasons, using multiple disjoint accounts is preferable.
Second, using a device to identify someone creates a single point of failure. That device will get stolen or will break at the most inconvenient time. The battery will run out. Or something else will happen.
This is far from theoretical. My phone died on a business trip. It turned into a nightmare because almost everything needs two factor authentication. It's really bad not being able to spend money you have. That is when I learned the value of cash.
The obsession with passwordless logins and 2FA is anti-consumer. It needs to stop.
> For obvious privacy reasons, using multiple disjoint accounts is preferable.
Actually, the reasons are not as obvious as one might think, and it's not as preferable as assumed. Disjoint anonymous accounts are one of the core reasons for all the problems we have on the internet (there are far more spammers and manipulators than there are whistleblowers).
I'm sure you are exposing corruption all the time or you are doing gay stuff in Iran but the majority of the anonymous activity is spamming, trolling or political manipulation.
If you say why you need an anonymous account, we might work out a solution for you.
I don't know why you think I'm not serious, because trolls, spam and political manipulation are a thing on the internet these days, Google it! Care to answer and further explain your arguments?
Because people tell you their name if they want to, and give you their thoughts if they choose to do so. Having your name available to all at any time and anything you've ever said regardless of context or without limits to the intended audience is madness. It's a freedom we should not be willing to give up so easily because of technology.
On top of that, having all that personal information available will actually increase the power of trolls and political manipulation, not to mention exacerbate identity theft, profiling, security and phishing problems.
You don't have to give your name to anyone. Single sign-in implies a single person, but it doesn't need to hold any more information on that person, and even if it has the info, it doesn't have to share it with anyone.
It's really serious? Wow! Not even Microsoft TPMs write such nonsense.
Forcing people to have a single ID across the web is a terrible idea. Having multiple disjoint accounts makes it much more difficult to track your activity online. It's not worth killing privacy just to stop a couple of annoying spammers.
Of course Google/Facebook/Microsoft love the idea because it makes tracking user behaviour much easier.
How does Facebook have this magical power for which you provide no explanation? Of course, there is no such magical power. The trick works if you allow it, that is unless you block it with technical means. After all, you are still in control of your computer, so if you block the information they use to track you, they won't be able to track you. Thus, it is not sufficient reason to force the implementation of a dystopian surveillance machine.
No, no magical tricks. They do device and browser fingerprinting; essentially they can detect you across the internet and in real life through your device's networks, device identifiers like the MAC address, device specs and configuration.
All of these information sources can be sanitized so that the entropy they give off is reduced. See efforts by the Tor Browser and Firefox on reducing the effect of fingerprinting. Additionally, your MAC address is not exposed over the internet, just to the next hop.
I'll repeat what I said:
> Thus, it is not sufficient reason to force the implementation of a dystopian surveillance machine.
See, this behavior is absurd in real life, and I should have been able to avoid speaking with you so I don't waste my time. The anonymity gives you a shield against it, and I don't think this is something healthy. Why am I being called a troll for pushing an argument? Because of people like you, we can't have healthy arguments anymore, because calling someone names without repercussions has become the norm even beyond kindergarten.
You get a replacement if lost or broken. Then re-authenticate and restore from a backup. That's how it's done with iPhones.
And about the police, you destroy or lock your device. If that's not 100% secure, it's OK, because that's part of the risks of going against authority. Can you imagine revolutionaries not revolting because the police might hit back? The risk of being caught by the police is part of the outlaw experience and you will need to actively try to avoid it.
Who's providing the back up? How do you re-authenticate if you lost the thing you use to authenticate with?
It's easy saying it but I don't understand how you do those things. If the answer is you need a password to access the backup and to access an account to re-authenticate, I'm not enjoying the irony. If the answer is to trust a company like Apple, Google, Microsoft with the literal keys to my kingdom then this thing misses the point.
Currently I'm in control of my authentication; why would I hand that to some company and hope I don't annoy them enough to ban me, or just randomly come up against their AI guardians?
In the iPhone's case it's Apple, and it is accessed with a password, but in this hypothetical scenario it could be your own infrastructure or another device you own, and that one can be password-less if that's your thing. Having one password to activate your authentication device is not a big deal; it can be something like a mnemonic seed, etc.
The hazard is not one password but endless ways to authenticate through endless passwords.
> If that's not %100 secure, it's OK because that's part of the risks about going against the authority.
I was more talking about border control, which has been known, especially in the US, to believe that non-US citizens have no rights at all, and that even US citizens and permanent residents have only limited rights.
And for what it's worth you can end up having your devices seized or being executed by police simply for the "crime" of being at the wrong place at the wrong time because the cops managed to fuck up the warrant or its execution by blowing up the door at the wrong address.
Police going rogue are a threat model everyone should be aware of these days.
I guess the US has its problems. If that's an issue, don't go there and if you choose to go, don't bring a device with you that you can't tolerate being investigated by the authorities.
It's unreasonable to say that all your problems in life must be solved by this one tech or it's not worth having it.
Your solution to insecure passwords is to hand the keys of the internet to a small number of companies? This is beyond ridiculous.
Not only does this create single points of failure, it also entrenches the already existing monopolies out there (not to mention forcing everyone to dox themselves by providing biometrics).
Uhm, it's the public key, and yes pretty much anyone can have my public key. That's why it's called the "public" key. The private key never leaves the dongle, much like in a hardware cryptocurrency wallet.
I really fail to understand the whole confusion about FIDO; the amount of misinformation in the threads here (like in the parent comment) is staggering. It is a fantastic invention for expanding privacy & security. Maybe a lot of people confuse it with "log in with X"?
> Uhm, it's the public key, and yes pretty much anyone can have my public key. That's why it's called the "public" key. The private key never leaves the dongle, much like in a hardware cryptocurrency wallet.
This is not true for passkey, which the linked article is about. If you set up passkey on a site on e.g. your Mac, the credentials will be synced through iCloud and you can also use your iPhone or iPad as an authenticator. This raises the questions of the grandparent comment:
1. What if a FAANG company nukes the account that you use for credential syncing?
2. What if a FAANG company has access to your private keys?
In the case of Apple, I think (2) is covered. They use iCloud Keychain, which is already end-to-end encrypted. But I am not sure about other companies (Microsoft, Google), does the standard require end-to-end encryption of key material?
Seems like they will leave security up to the OS vendors? From the white paper:
> We expect that FIDO authenticator vendors (in particular those of authenticators built into OS platforms) will adapt their authenticator implementations such that a FIDO credential can survive device loss. [...]
> Just like password managers do with passwords, the underlying OS platform will “sync” the cryptographic keys that belong to a FIDO credential from device to device. This means that the security and availability of a user’s synced credential depends on the security of the underlying OS platform’s (Google’s, Apple’s, Microsoft’s, etc.) authentication mechanism for their online accounts, and on the security method for reinstating access when all (old) devices were lost.
Since when does "if you want you can copy the private key out" mean that you have to do that? What kind of logic is that?
Similarly I can say that passwords suck because there are (entirely optional) centralized password managers. Passwords are unsafe because you can voluntarily share them! Do not use passwords! /s
>Non-techies have their passwords leaked and identities stolen time and time again. And get frustrated when they keep forgetting their password. Biometric passwords are incredibly useful for these people.
In other words, broad technology adoption will always dilute and degrade the quality of different technology solutions.
People will get even more frustrated when their flimsy physical authenticators break down. Passwordless authentication is perfectly okay for a service like a physical bank, employer etc. that can always check your identity and re-enroll you via some other means if need be. It's a disaster in the making for something like Google, Facebook, MS etc. etc. that offer no such thing. Make no mistake, the same people who pick weak passwords today will neglect proper care about any kind of disaster recovery scenario.
The best you could do is use a "software-based" or "virtual" authenticator that explicitly refuses to protect against a user cloning and subsequently restoring its identity, at least as a last resort. Not good enough for bank payments, ofc. but plenty usable for everything else.
> most of us actually know how to keep our passwords secure, at least relative to the average user.
I'm an IT professional and despite using a password manager, I still prefer a physical security solution to a cybersecurity one. An attacker might be able to dump unencrypted secrets from my password manager, but they'll have to pry that U2F token out of my cold dead hands.
The password field in its current inlined form has to go in one way or another.
> but they'll have to pry that U2F token out of my cold dead hands.
That is a trade off between attack surface and difficulty of the "hack" though. Depending on who and where you are, getting your physical token by force is trivial. If your token is important enough, there will be plenty of people that won't shy from using violence.
Getting a password or a PIN code to a smart card by force is trivial, too. Even if one's physical wellbeing is protected by law, they can still be jailed[1] for failing to provide a password, even if no such password exists.
The previous guy who educated me about rubber hose threat modeling later turned out to be running a drug dealing operation[2].
I use a Yubikey for some websites. But for me it feels very black-box-like. I read about a lot of its functions on the internet, but I'm afraid to play with it because I don't want to break anything. Also, in the end there is always a second password for emergencies. So if your password list leaks, the emergency password leaks with it for most people, I guess.
> Some argue that users without biometric authentication will be flagged suspicious and pressured to hand over biometric data
The distinction is about the usage of modern technologies, not handing over biometric data. There will come a time where you will need to use a device with a Secure Enclave or a TPM. That device will keep biometric data on device. You will not hand in the data, you will have to use the feature.
Friendly reminder that device attestation is part of FIDO and webauthn.
It isn't enough that both your device and the website speak the protocol. The website gets to cryptographically verify that your device is not under owner control. If you want a FIDO/webauthn device that lets you have the key material (like HOTP does), you're out of luck. No owner-controlled devices allowed.
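For the curious, the hook for this is the attestation data returned at registration: the authenticator data embeds a 16-byte AAGUID identifying the device model, which a relying party can check against a list it trusts (e.g. derived from the FIDO Metadata Service). A rough sketch of the idea, with a placeholder allowlist:

    // Sketch: extracting the AAGUID from WebAuthn registration data. Offsets follow the
    // authenticator-data layout: 32 bytes rpIdHash + 1 byte flags + 4 bytes signCount,
    // then (when attested credential data is present) 16 bytes of AAGUID.
    function extractAaguidHex(authenticatorData /* Uint8Array */) {
      return Array.from(authenticatorData.slice(37, 53))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    // Placeholder allowlist; a real deployment would populate this from the FIDO MDS.
    const allowedAaguids = new Set(["00000000000000000000000000000000"]);

    function isDeviceAllowed(authenticatorData) {
      // Home-built authenticators (or ones with a zeroed/unknown AAGUID) simply fail this check.
      return allowedAaguids.has(extractAaguidHex(authenticatorData));
    }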
That's an important observation. We're going to go from "Log in with your Apple, Google, or Microsoft account" to "Log in with your Apple, Google, or Microsoft hardware".
Cloudflare is one example: a security key not found in the FIDO Metadata Service will not work on their site. This precludes the use of any hacker-friendly solution (making your own).
> Supported: All security keys found in the FIDO Metadata Service 3.0, unless they have been revoked for security reasons.
Attestation keys aren't very "privacy friendly" either, and it's much worse for those who wish to create their own key.
> Usually, the attestation private key is shared between a batch of at least 100,000 security keys of the same model. If you build your own OpenSK, your private key is unique to you. This makes you identifiable across registrations: Two websites could collaborate to track if registrations were attested with the same key material. If you use OpenSK beyond experimentation, please consider carefully if you want to take this privacy risk.
That's used for Cryptographic Attestation of Personhood, so the restriction to genuine devices has some rationale behind it. It's not quite about checking an identity for login purposes, so much as checking that an actual human being is operating the device, via e.g. biometrics or physically poking in a PIN. (As a CAPTCHA replacement.) The precise key you use ought not to matter for that purpose, it could even be "enrolled" on the spot.
This is actually a key point with respect to digital rights.
When you analyse the debacle that put Jordan Peterson in the spotlight, it was not really about gender pronouns as such, but about compelled speech. Compelled anything is a problem. Compulsion removes responsibility. For example, I am currently not compelled to own a car, so I can make the choice to help the climate by walking. I am not compelled to own a television, as in Orwell's 1984, so I can select my own entertainment and news.
What does this have to do with cybersecurity?
Ownership and control are central to the psychological motivations that allow security to work at all.
Let's ask the question: would you allow a stranger to dump their possessions in your lounge for a few weeks? No. You'd at least charge them rent for storage. Stuff that takes up your space, that you cannot use, and that you are responsible for is an encumbrance.
Now let's say that a government, or some cabal of tech-monopolists, begins forcing you to shoulder encumbrances that you must install in your home, or walk around carrying - physical devices that you do not own in any real sense. This is an imposition. An analogy that springs to mind is the US constitutional third amendment on the quartering of soldiers in private homes without the owner's consent [2].
With secure enclaves we are seeing a drift toward devices as both an encumbrance and an imposition. Hardware one is compelled to use and take responsibility for, such as repair or charging batteries, which serves the interests of others (to track and identify you), is a cost without benefit, and has no future.
Unless you physically attach it, like a prison tag, you can no more compel a person to carry an object than you can compel their speech. Unless the owner acts by volition, and chooses to take responsibility for an object in their own self-interest, various forms of sabotage are inevitable.
I've seen this where workplaces have tried to push workers into carrying a "company smartphone". It gets "lost". It gets "broken". People lock them and then "forget" the password.
Security solutions need the consent and cooperation of both subjects. Otherwise they are not "security", but simply means of control. At the very least we should stop dignifying them with the word "security".
[1] "Owner-controlled" is already a tautology. Ownership implies control (and freedom to dispose of) and vice versa. Anything else is simply "in my possession" or "upon my person", as legal people would say.
[2] I never understood this seemingly ridiculous language about the garrison of soldiers etc., but it increasingly makes sense to me as a catch-all for any agent placed on my property. Hence I think that secure enclaves touch on 3rd amendment rights.
* I thought only one of these companies was notorious for spyware and surveillance? One has made not tracking/privacy a primary marketing angle, the other is kind of an also-ran on mobile?
That said, I think I'm 100% on your side of the "hooray, let's require an account with some finite set of identity providers, wcgw?" debate.
** This doesn't even make sense. The entire point of the modern secure elements is that they are a "real" hardware token. I get that a bunch of Android devices are penny-pinching to the extent that they may be dropping actual secure elements, but that seems like something Google could fix by mandating real secure elements.
*** Agreed with you on <*> + A&G controlling the platforms, but curious about who is trying to get CPU-resident? I think I may have missed this latest insanity :D
Is it a good idea for me to buy Yubikeys or Titan keys for my personal Google account? My wife's? My childrens'? Will that make it less likely, or more likely, that I eventually get arbitrarily locked out? These are accounts that we use on Windows PCs, Chromebooks, and Android phones. The keys would never leave the house. I think.
I know the phones sorta fulfill this function already, but lost phone = big trouble, right?
I lost my phone on a business trip. It was a nightmare. Don't use your phone as a 2FA for everything. It creates a devastating single point of failure.
It's easier to keep multiple yubikeys than multiple phones.
Also Google relying on Google for anything important/critical is a bad idea. The internet is full of people complaining that Google deactivated their account for no apparent reason.
It's not, and it's worse since there isn't a PIN or biometrics keeping it locked. Keep one in a safe for emergencies (like losing a phone) that can also unlock your accounts.
>Is it a good idea for me to buy Yubikeys or Titan keys for my personal Google account? My wife's? My childrens'? Will that make it less likely, or more likely, that I eventually get arbitrarily locked out?
Yes. Buy the keys with at least one backup key. Treat the backup key(s) as 'critical data' and keep it in a fireproof safe or a safe deposit box. As you add more 2FA to existing accounts, you will need to periodically access this backup key and attach it to those accounts. It's extremely unlikely to be locked-out of your account if you authenticate with U2F key and password.
Highly recommend getting FIDO keys. Assuming you get multiple keys - you can even share one physical key as a backup key across all family accounts, while each person has one personal key - and register each phone as another FIDO key, and cross-register the parents' keys with each other's and with the children's accounts, then every account will have at least 4 keys, with different physical exposure.
The chance of lock-out is basically 0 at that point. An added bonus is that security key auth doesn't need most of the heuristics-based phishing or credential theft detection applied, as security keys are provably unphishable. So the sign-in is less likely to be subject to false positives from those heuristics.
WebAuthn is way better than (for example) OAuth in this regard, since the identity provider doesn't actually hold your private key; but I agree there are still potential concerns about vendor lock-in.
Ideally there will be third-party credential databases (just like with password managers), and a way to export your keystore and import it into another provider. That would solve this problem.
If you don't want your identity to be tied to one specific device, then you need somewhere to store your keystore that isn't on that device.
Technically you don't need a third or even a first party for that; you could self-host. In practice though, the overwhelming majority of users will be using some sort of cloud storage service. A third party service in this case is likely better than first party since I don't want to lose my credentials if I decide to use a different OS or browser, and browser/OS vendors haven't been great on cross-platform compatibility for credential storage thus far.
The FIDO standard by itself is a great technology, very secure and very well thought out.
You can login to a service only if you possess a hardware token, which can't be cloned. Also, it automatically saves you from phishing sites.
But I share the fear of others that integration of this standard, curated by the top management of mega corporations, will lead to diminishing of its advantages.
FIDO keys give power to users, which is a threat to power hungry monster corporations.
As far as I understood, FIDO intends to change the login/password security model to one based on biometric data like a fingerprint or Face ID. But didn't we all agree that "biometric data is a login and not a password"? How can FIDO be more secure than different passwords per website or service?
Not quite right. FIDO2 uses a public/private key pair to authenticate you to a website. The private key is stored on a device you control. The device could be something like a Yubikey, which does not authenticate you, or like a mobile phone, which typically authenticates you using a biometric. In any case, the website sees a FIDO2 authentication request from you based on the private key, not on your biometric.
Well, the assumption is that ownership of the device and ability to unlock it (face | fingerprint | passcode) present 2 factors for authentication of the FIDO local keychain.
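Roughly, the model is challenge-response signing. A toy sketch of the idea using Web Crypto (illustrative only; real WebAuthn signs more than the bare challenge, including the origin and authenticator data):

    // The authenticator generates a key pair at registration time; the website only
    // ever stores the public key.
    async function demoChallengeResponse() {
      const { publicKey, privateKey } = await crypto.subtle.generateKey(
        { name: "ECDSA", namedCurve: "P-256" },
        false,                      // private key is not extractable
        ["sign", "verify"]
      );

      // Login: the website sends a random challenge...
      const challenge = crypto.getRandomValues(new Uint8Array(32));

      // ...the device signs it (after its local biometric/PIN check)...
      const signature = await crypto.subtle.sign(
        { name: "ECDSA", hash: "SHA-256" },
        privateKey,
        challenge
      );

      // ...and the website verifies with the stored public key. No reusable secret
      // ever crosses the wire, so there is nothing for a phisher to capture and replay.
      return crypto.subtle.verify(
        { name: "ECDSA", hash: "SHA-256" },
        publicKey,
        signature,
        challenge
      );
    }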
Passwords are a time-tested and failed approach. They rely on users using different passwords for each system. Most people suck at passwords. Just look at your parents (or, for the youngsters, your grandparents) - they probably suck. We probably will at their age also, except we use tooling like password managers that are still frustratingly difficult for non-techies.
Can we stop with this ageism? There are plenty of teenagers and people in their 20s or 30s with ludicrous passwords. Unless you are a techie, passwords are chores and nobody wants to do complex chores.
Passwords have their problems but this will not make things more simple so I really doubt it will be more secure in the end. If it's adopted it will just change attack methods.
On my side, I do not trust any company mentioned in the article and do not use any of their products. If I'm required to have anything to do with them, I'll just be locked out (and I don't think I'll be the only one).
That's not quite correct. FIDO uses public key cryptography precisely so that no secrets are transmitted. On an authenticator, the private key can be (locally) secured using Face ID or whatever, even just a PIN.
Because there is no way to enforce that those passwords are actually different and not endless variations of the same easy-to-guess password. Users have been doing all the wrong things with passwords ever since they were invented. A few years ago I caught my father with an actual notebook of all his key passwords and introduced him to Bitwarden. A burglar could have gotten access to pretty much everything with that notebook.
Most of my accounts are unimportant and nothing would happen if they were stolen or hijacked. I totally write down those passwords somewhere. The important ones (like bank logins) are not in writing anywhere. There is no reason to pretend all accounts are similarly important or that the same security policies should apply.
While there are a lot of smart people at the EFF, it's an organization primarily focused at defending legal rights. While we definitely want to consider the legal rights of an individual while coming up with a passwordless protocol, I put this work to technologists, not lawyers, first.
Meanwhile here in 2022 I still can't use my classic blue Yubikey to secure my Microsoft account, even though it works fine with Github/Google/Fastmail/etc. I don't want passwordless, I just want 2FA that isn't linked to my phone.
(If anyone knows why I get a "this security key can't be used" error when I try to add my Yubikeys to my msft account then I'd be interested to know. My theory is that the browser is trying to store a secret on the key but the key is immutable, but its very hard to tell.)
> My theory is that the browser is trying to store a secret on the key but the key is immutable, but its very hard to tell.)
Pretty much. Microsoft requires Webauthn resident key[0], which allows the authenticator to authenticate passwordlessly, including without a username or email[1]. This means the authenticator has to store the relying party ID (domain/hostname) and associate it with a new key, so that it won't sign login requests to a different website.
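For comparison, this is roughly what a resident-key (discoverable credential) registration request looks like; older U2F-only keys have nowhere to store it, which is consistent with the error described above (values are illustrative):

    // Sketch of registering a discoverable ("resident") credential.
    async function registerDiscoverableCredential(challengeFromServer, userIdBytes) {
      return navigator.credentials.create({
        publicKey: {
          challenge: challengeFromServer,              // from the server
          rp: { name: "Example", id: "example.com" },  // illustrative relying party
          user: { id: userIdBytes, name: "alice", displayName: "Alice" },
          pubKeyCredParams: [{ type: "public-key", alg: -7 }],
          authenticatorSelection: {
            residentKey: "required",       // WebAuthn Level 2 name
            requireResidentKey: true,      // older name, kept for compatibility
            userVerification: "required",  // enables later username-less sign-in
          },
        },
      });
    }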
Is anyone working on a FOSS software solution for a FIDO2 authenticator? I know of Authy (which isn’t FIDO2, iirc) and some other projects, but none implement the passwordless login.
I have some time and the interest. I know attestation is an (optional?) part of FIDO2, and an OSS app will probably never be “verified”. Still, having a pure-software option/reference would be a boon.
Since this is extremely important for the future of the free web, I’m absolutely willing to help any project with these goals.
I've considered adding FIDO2 support to the software-only U2F token I wrote in Rust. It's a fair bit of work though, and I am not sure how comfortable I am with passwordless login unless the keys are kept purely in hardware such as a TPM.
That said, my reading of this post is that FIDO2 support will get built into Chromium directly, which is itself open source. Or if you do want a hardware key but running open software, I'd definitely recommend https://solokeys.com/, I've been following them for a long time.
> I am not sure how comfortable I am with passwordless login unless the keys are kept purely in hardware such as a TPM.
Shouldn't this be something for the user to decide? Using a TPM as a key is a bit silly; it amounts to turning the computer as a whole (strictly speaking, its motherboard and/or CPU) into a big smartcard/FIDO key. If that physical device breaks down, the associated identity is toast. A purely software-based key you can always back up.
Yes that's fair, it's a trade-off. Thinking more, there are really three levels of protection I see.
1. Keep keys in a file/the keyring. This protects them somewhat from non-root users on the same machine. It also provides phishing protection, which is really the most important aspect of U2F/WebAuthn to most people. If your computer is compromised, all the keys are compromised.
2. Keep keys in the TPM. The only additional protection over #1 is if you recover your computer after it is compromised, you can be reasonably certain the attacker could not make a copy of the keys and thus can no longer use them to authenticate. Arguably this is not a particularly useful protection.
3. Keep keys in the TPM/secure enclave and unlock them via biometric. This does provide meaningful extra protection if every use requires an unlock. Then if your computer is compromised, the attacker will have to either defeat the biometric unlock, or trick you into unlocking for every authentication attempt.
#1 is what I do right now for rust-u2f, and I think you're right #2 is not really useful. So maybe it's worth just implementing FIDO2 without worrying about TPM support. What I really was talking about was doing #3 for Linux, but I don't see a way to meaningfully accomplish it without tight hardware integration.
P.S. It is recommended to register multiple FIDO keys and/or have backup codes for accounts, as a way to mitigate the issue of a single physical device breaking and toasting an identity. Not all providers may support this, for silly reasons.
Now that there are fewer eyeballs on this, here's our open source JWT alternative: https://github.com/Cyphrme/Coze. We've got a roadmap for using that to build alternatives to FIDO. Please feel free to message me with questions or anything else and I will detail our plans.
Before competing against FIDO, the libraries need to be far more ergonomic. Coze is a good start.
> I’m absolutely willing to help any project with these goals.
One way to view this is that they are formalising how password managers should work: integrated into the browser / device, and using public key credentials for logins instead of username / password.
There are of course side issues such as big vs small players, privacy, security etc, but that's not the major thing about this.
There already is, at least for the techies: YubiKey + OTP with your password manager.
Non techies are SOL as usual. Proper password education apparently is too hard, so big tech is frothing at the mouth to get total control over your password management so you can't ever leave.
You come to my phishing site thinking you are going to your bank site. You enter account and password. I enter that on the bank site. The bank site asks for your OTP code. My site then asks for your OTP code, which you enter because you think you are talking to the bank site. My site enters the OTP code on the bank site.
The authentication process effectively salts the response with both the web origin and the anti-replay counter. The result is then different for every web origin and so the phishing site would get a different authentication response that is incorrect/useless for the real site.
This protection is only for the classic phishing scenario where the user-agent is showing a different origin that is masquerading as the user's desired site. AFAIK, this won't solve the problem of Trojan apps where the authenticated user-agent itself can be subverted into making inappropriate requests. In such malware scenarios, you could also just hijack the other temporary session cookie/token secrets after the login step.
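A sketch of why the relay fails on the server side (field names follow the WebAuthn clientDataJSON format; the rest is illustrative, and the server must additionally verify the signature over the authenticator data and the clientDataJSON hash):

    // The browser writes the origin it is actually talking to into clientDataJSON,
    // and the authenticator's signature covers it. The real site checks it here.
    function checkClientData(clientDataJSONBytes, expectedOrigin, expectedChallengeB64url) {
      const clientData = JSON.parse(new TextDecoder().decode(clientDataJSONBytes));
      // If the user was on phishing-bank.example, the browser recorded that origin,
      // so the bank rejects the assertion even though the signature itself is valid.
      return clientData.type === "webauthn.get"
          && clientData.origin === expectedOrigin              // e.g. "https://bank.example"
          && clientData.challenge === expectedChallengeB64url; // base64url of the server's challenge
    }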