Basically, your phone and computer (or USB key) will be able to generate keys and authenticate to each website uniquely, without the possibility of someone stealing your password or phishing it/spoofing the site.
It's a fantastic improvement: imagine if you could log in to a site on any untrusted computer just by plugging in your USB key, and be totally secure again after you logged out of the site.
Welcome to twenty years ago and client SSL certificates.
I'm glad the future is catching up.
There was a really big industry push back then, and many expected smartcards to take over, but it fizzled: hardware companies generally don't do software well, and nobody really wanted the added complexity of extra hardware.
Maybe it will go better now that individual users have a use case, as things like Gmail and Coinbase support it, and not just the usual enterprise platforms.
It is true that I don't know of any currently existing authenticators that support biometric authentication, but what's nice about this spec is that the biometric/PIN/etc stuff is never exposed to the server. When new kinds of authenticators enter the market (like if Google adds the feature in an Android system update), they can immediately be used without having to change any code on the servers. And like you say, even without the multi-factor stuff it'll be a fantastic improvement over the current dumpster fire that is passwords.
Full disclosure: I work at Yubico and am one of the editors of the Web Authentication spec.
Fitting keyboards with NFC devices could make it even better.
As long as the user must do something on the device to approve it, otherwise you can have proximity scanning attacks as with RFID.
You could conceivably have a login sessions page and access it from a trusted computer to force logout of a session from an untrusted computer.
But it seems the bigger problem with login from a malicious computer is that the computer could do whatever it wants with your session while you're logged in, regardless of how securely you can logout later.
(1) people frequently (and rightly) do that anyway, so we might as well make it less bad.
(2) Building a trustworthy computer is largely about isolating different components in a way that hurts malware more than it does the user-experience. Sticking keys on a USB key is one instance of such isolation.
This would leave a window of a few seconds after logging out in which attacks could occur, but that could be mitigated by requiring a failed request on logout: require the USB key to be removed upon logout, send an auth-ping, and only show a successful logout message if the auth-ping fails. (Even without this mitigation I'm not sure what kind of attack would work in the in-between seconds, but we might as well mitigate it in case someone else can think of something.)
Of course there's only so much you can do if your hardware is compromised, you can just try to mitigate the issue as much as possible and hope for the best.
All signed access tokens, when logged out or revoked, should be marked as such on the server side. Of course, the untrusted computer should at the very least send the logout request.
Typically, a service should also let you see all your active sessions and let you revoke them.
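The server-side bookkeeping described above can be sketched as a simple session store. This is a conceptual sketch, not any particular service's implementation; the class and method names are hypothetical:

```python
import secrets
import time

class SessionStore:
    """Tracks active sessions server-side so any one of them can be revoked,
    e.g. from a 'sessions' page viewed on a trusted computer."""

    def __init__(self):
        self._sessions = {}  # token -> session metadata

    def create(self, user: str, device: str) -> str:
        token = secrets.token_urlsafe(32)
        self._sessions[token] = {"user": user, "device": device,
                                 "created": time.time()}
        return token

    def is_valid(self, token: str) -> bool:
        return token in self._sessions

    def list_for(self, user: str) -> list:
        """Shows the user all their active sessions."""
        return [m for m in self._sessions.values() if m["user"] == user]

    def revoke(self, token: str) -> None:
        """Marks the token dead server-side; later requests with it fail,
        regardless of whether the untrusted computer kept a copy."""
        self._sessions.pop(token, None)

store = SessionStore()
t = store.create("alice", "library pc")
assert store.is_valid(t)
store.revoke(t)              # e.g. triggered remotely from a trusted device
assert not store.is_valid(t)
```

The key property is that validity lives on the server, so the untrusted computer holding a stale token gains nothing after revocation.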
You're describing plugging a USB key containing your identity into an "untrusted computer".
It's hard to call that a "fantastic improvement" or really anything other than a "terrible idea".
This is miles better than password-based login.
Tell that to Yubico et al, who have based their businesses exactly on this "terrible idea", making secure USB devices that can be plugged in to insecure computers.
Well, no machines more untrusted than the ones I do work or internet banking on.
I hope people don't think of Yubikey-type devices as "magic security pixie dust" that somehow makes it OK to do sensitive things on an internet cafe PC. (Fully prepared to be told anecdotes of people doing stuff I'd consider crazy in response to this...)
A USB device like a Yubikey could provide an untrusted (or even trusted) device with only a temporary access token. The untrusted device can only manipulate a user session with the user present; once the user has left, the session expires and the computer has no access token.
WebAuthn makes it impossible to continue or begin a session without being authorized to do so if a USB device is used.
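One way to realize the "temporary access token" idea above is a server-signed token that embeds its own expiry. This is a minimal conceptual sketch (the signing key and TTL are hypothetical), not how any particular deployment does it:

```python
import hashlib
import hmac
import time

SERVER_KEY = b"server-side signing key (hypothetical)"

def issue_token(user: str, ttl: int = 300) -> str:
    """Issue a token valid for ttl seconds; after that, the untrusted
    computer holds only a useless expired string."""
    expires = str(int(time.time()) + ttl)
    sig = hmac.new(SERVER_KEY, f"{user}|{expires}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{user}|{expires}|{sig}"

def check_token(token: str) -> bool:
    user, expires, sig = token.split("|")
    expected = hmac.new(SERVER_KEY, f"{user}|{expires}".encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time compare, then reject anything past its expiry.
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()

t = issue_token("alice", ttl=300)
assert check_token(t)
assert not check_token(issue_token("alice", ttl=-1))  # already expired
```

Because the expiry is inside the signed payload, the untrusted machine cannot extend its own session by editing the token.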
TOTP is one implementation of a "trusted device authorizes untrusted device" protocol that Yubikey/WebAuthn/others enable.
Compare with password logins: Once you enter your password you trust the device you entered it on to not compromise it.
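For reference, the TOTP scheme mentioned above (RFC 6238) fits in a few lines of standard-library Python; this sketch uses the RFC's own published test vector to check itself:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically
    truncated to a short decimal code."""
    key = base64.b32decode(secret_b32)
    counter = int(at // step)                      # which 30-second window
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at T=59s -> "287082"
secret = base64.b32encode(b"12345678901234567890").decode()
assert totp(secret, 59) == "287082"
```

The trusted device (phone or key) holds the shared secret; the untrusted computer only ever sees a short-lived code, never the secret itself.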
But we still need the phones to support U2F.
As it stands, my pass phrase and fingerprint are the weak links, as phone logins rely on TOTP. One day I’ll be able to tap my phone against my Yubikey along with my pass phrase, and/or my fingerprint. That would be lovely.
That works fine for me on Android.
What segregation between these is there within the phone? Doesn't it mean the fingerprint gets you the keypair for free?
edit: The original headline, before being changed, was "Web standard brings password-free sign-ins to virtually any site", and the article contained a paragraph espousing fingerprints in place of passwords.
Consider a simple fingerprint USB vault which stores my keys:
* Factor 1: You must have physical possession of my vault.
* Factor 2: You must be me or have a convincing fake of my fingerprint.
Before we even think about a password I've already prevented almost all of the attacks I'm likely to ever encounter against my accounts.
* I have made it impossible for someone to casually break into my accounts/device.
* I've created enormous distance between myself and remote attackers.
* I've eliminated password reuse and contained the effect of data breaches to the service that was breached.
* I've made it much more difficult for network operators to carry out MitM attacks since tokens are origin bound and the challenges are real-time with replay protection.
Yes, in a forum of nerds you can point out that lifting fingerprints is possible, but if everyone switched to this simple U2F device the world would be far, far more secure. Passwords optional.
Then if you're worried about more sophisticated attacks like corporate espionage or government surveillance, you can add a password.
And you can't easily change them.
I think for most people convenience remains a win over the marginal increase in risk — someone who can get that close to you can also use a hidden camera/drone to watch you enter a password, steal your wallet/bag with two-factor codes, etc.
How does registering and unlocking another phone show that it would work on her phone?
I really don't think TouchID is at all riskier than even a 6-digit passcode. I really still wish Apple allowed multi-factor unlocking though.
I don't remember how/if it was demonstrated that the fingerprint was a useful copy in any way (and certainly not on that official's iPhone).
So you really have Factor 3: You must have the password to power on the fingerprint reader.
The fingerprint isn't being used as the password.
Only because you have to lift them and then manufacture a copy from scratch out of glue and silicone and stuff. If someone automated the process it would require little to no effort. In theory it would be possible to manufacture a device that could present any given fingerprint to a popular scanner. You leave fingerprints everywhere, even on the scanner itself.
It is also a limitation of biometrics that you only get one set: after you have given Google your fingerprints, they can in turn use them for other purposes. It's like reusing a password that's also nearly impossible to rotate.
Both your fingerprints and the vault can be taken from your person without your consent; a password much less easily.
You can't replace passwords with fingerprints, as you still need a backup to update said system.
You know this how? There have not actually been any real studies done, and the FBI and law enforcement resist any efforts to study how unique fingerprints really are.
It has been the bedrock of criminal prosecution for over a century, but they do not want any public analysis of how many people share similar prints.
Print reading, even by a computer, is more of an art form, a massive guess, than it is a science.
Same could go for websites. A simple biometric factor like fingerprints is easy and friction-free enough to log users in on previously seen devices. My password can then be much longer, and the website can impose stricter rules (longer passwords, no breached passwords, etc.) without adding much user friction either, since most people will probably only log in from four devices at most (desktop, phone, tablet, and some library computer).
2. A fingerprint hash isn't a cryptographic hash because you need to be able to match to nearby matches. A small variation in input needs to have a small variation in the hash so a distance function can be applied.
Those are terrible properties for a password.
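The avalanche property of a real cryptographic hash makes the contrast concrete: any tiny change to the input flips roughly half the output bits, so two near-identical fingerprint scans could never "nearly match". A quick sketch:

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# Two inputs differing by a single character, as two successive scans
# of the same finger would.
h1 = hashlib.sha256(b"fingerprint sample A").digest()
h2 = hashlib.sha256(b"fingerprint sample B").digest()

# A cryptographic hash flips roughly half of its 256 bits for any change,
# so a biometric matcher cannot work on hashed templates -- it needs a
# distance-preserving encoding instead, which is exactly what makes such
# an encoding unsuitable as a password.
print(bit_diff(h1, h2))
```

This is why biometric matching has to happen locally on the device, against a stored template, rather than by comparing hashes server-side.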
Many builders/carpenters/etc. will tell you this is not true. People who work in abrasive environments, sometimes without proper protection, often temporarily have no fingerprints, as they are worn off.
Many injuries can effectively modify or remove them too, at least temporarily.
This makes them bad usernames as well as bad passwords.
Do they come back in the same form as they previously were?
Additionally, I was under the impression that some fingerprint readers looked at the blood vessels rather than the actual prints. Not sure how that would be affected by abrasion. Perhaps this is a misunderstanding on my end.
Would you create a rubber stamp of your passwords, slather it with oil, and then go around stamping it on everything you own and everywhere you go?
I'm struggling to remember the protocol from a sci-fi novel where two secret agents who (separately) had their minds transferred to new meat sacks reconnected in a new (hostile) environment.
I think it had three parts: What you have, what you know, what you are.
We verify the server's identity through its public certificate, which is signed by a certificate authority. The server can verify the client's identity via a public client certificate signed by an authority the server trusts. It's already possible to do this over a TLS connection.
If the finger print is what you are, and password is what you know, then what is the "what you have"?
Mostly I'm curious if that sci-fi books' "three factor auth" scheme (because I don't know what else to call it) is a feasible model.
One possible form of "three factor auth" would be to use a passphrase for the private key, the client certificate to connect with the server over TLS, and a username/password login at the application level.
The certificate/certificate fingerprint is what you have, the password and passphrase are two things that you know. I don't know what would fit under the "what you are" category though (unless you're considering some sort of biometric based method).
A machine can obtain certificates from a CA showing that the CA validated its identity, and use public-key cryptography to prove the certificate is its own. This is how HTTPS works when you connect to a remote web site, but it can be mutually authenticated too; that's just not how web browsers use it.
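The mutual-TLS setup described above can be sketched with Python's standard `ssl` module. The certificate file names in the comments are hypothetical placeholders; the point is just which knobs differ between one-way and mutual authentication:

```python
import ssl

# Server side: demand a client certificate signed by a CA we trust.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.verify_mode = ssl.CERT_REQUIRED      # reject clients with no cert
# server_ctx.load_cert_chain("server.pem")           # server's own identity
# server_ctx.load_verify_locations("clients-ca.pem") # CA for client certs

# Client side: verify the server as usual, and present our own certificate
# when the server asks for one.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# client_ctx.load_cert_chain("client.pem")           # client cert + key

assert server_ctx.verify_mode == ssl.CERT_REQUIRED
```

The only difference from ordinary HTTPS is the server's `CERT_REQUIRED` setting plus the client loading a cert chain; the handshake machinery is the same in both directions.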
I'm not feeling great about these algorithm choices. Namely: RSA with PKCS1v1.5
See also: https://robotattack.org
EDIT: Also, elliptic curve digital signatures with a random nonce (here, "r") instead of deterministic nonces makes me feel itchy.
The "Use Cases" section in particular does a pretty good job of explaining how this would work from a user's perspective: https://www.w3.org/TR/2018/CR-webauthn-20180320/#use-cases
Many corporations and agencies use certificates either as the primary (only) auth method, or as supplementary auth. For example, DTrade, the US defense export licensing system: http://www.pmddtc.state.gov/DTRADE/index.html
We use them at work and they work great. Pretty much instant auth to any internal site with no shenanigans.
Oh, and some companies have things so locked down that users can't install client certs at all!
Things have barely changed here, so I avoid certs for users. Having said all that, they work great for authentication between software components, where there are no pesky users involved.
As for downsides, yes, no exporting: users have to go through the enrollment process with each new device.
You wouldn't say that if you'd had to support our users!
At least when we created the system, the method of private key generation differed from browser to browser. Then we had to sign their CSR in our offline PKI system and email them a password-protected PKCS#12 certificate bundle, and send them the password out-of-band. Finally, they had to import the bundle into their browser. It could have been simpler, but "big corporation" :(
The whole _point_ of a CSR is that you don't need any secrets. The CSR proves that its creator has some private key in their possession and wants a certificate. You need to verify that whoever presented the CSR is really who they said they are (this is hard, hence ACME for the Web PKI, but it's out of scope for the Public Key Crypto problem) and then you issue a Certificate, which isn't secret.
Emailing PKCS#12 bundles (which have a key inside them) is asking for trouble but more importantly here: if the user already has a perfectly nice key, why are you sending them a new one ?
The idea is each cert identifies a device. There's no reason to not just issue one cert per device, especially if you make the enrollment process fairly painless. If a user loses their phone, it's very nice to not have to revoke their one global cert -- just revoke the one that was used on the phone and be on your way.
Google proposed, and Chrome implements, Origin-Bound Certificates (http://www.czeskis.com/research/pubs/tls-obc.pdf) which are just about as good and with no UX impact. Section 8 covers many of the problems with TLS client certs.
AFAIK Chrome has the only implementation of OBC.
The big problems you have are private key generation, certificate distribution, revocation, renewal, and guiding your users on how to install the damn things.
With client certs, I sign into Facebook, Hacker News, LWN, and all three of them get told "This is tialaramex, serial number 183749265". So does LegitAds, and XHamster, and Penny Arcade. Which is great for advertisers, but probably not what most users want.
With this technology, each site gets a consistent but site-specific identity. So they know it's still "me", but they have to collaborate out of band to discern that all these people are the same person, as they would without any authentication framework.
The requirement for user action to authenticate has a benefit here too - advertisers even if they have Facebook's permission to check which Facebook user I am, daren't risk saying "Er, hi, can you just sign into Facebook for us?" because it's obvious what's going on. Only the places which already explicitly need authentication will ask.
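The masking described above, one device secret yielding a consistent but unlinkable identity per site, can be illustrated with a simple per-origin key derivation. This is a conceptual sketch, not WebAuthn's actual construction (which generates a fresh key pair per registration):

```python
import hashlib
import hmac

# Hypothetical device-local secret; it never leaves the authenticator.
MASTER_KEY = b"device-local secret (hypothetical)"

def site_identity(origin: str) -> str:
    """Derive a stable, site-specific identifier from one device secret.
    Each origin always sees the same ID for this device, but IDs from
    different origins cannot be correlated without out-of-band collusion."""
    return hmac.new(MASTER_KEY, origin.encode(),
                    hashlib.sha256).hexdigest()[:16]

a = site_identity("https://news.ycombinator.com")
b = site_identity("https://example.com")
assert a == site_identity("https://news.ycombinator.com")  # stable per site
assert a != b                                              # unlinkable across sites
```

Contrast this with a single client certificate, where every site sees the same serial number and cross-site tracking is free.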
I think it's a less bad alternative for people using the same password on everything. As many comments here already pointed out, it's not perfect. I don't feel like I know enough about it to know just how good it will be for the average person.
See also: that SQRL thing Steve Gibson has been working on for years?
Hopefully the web authentication standard won't suffer the same fate. It _is_ backed by several major companies (Google, Microsoft, Mozilla) so at least they'll be able to kickstart adoption by implementing the API on their own sites.
> Authentication and identification is combined
This is plain false. There's nothing in the spec that says you can't give a site your email address, and there _is_ a built-in way to revoke credentials in the spec.
> Single point of failure
This is dumb. Password managers have exactly the same flaw and nobody seems to have a problem with that.
> Social Engineering attacks
This is true, but only in exactly the same way that password managers are vulnerable. (If the user is for whatever reason not using a browser plugin to authenticate, _and_ you can trick them into entering their info on the wrong site.)
1. Easier to set up new accounts. No fiddling with onerous password requirements or text boxes; just one click and you're done.
2. More secure and/or convenient than password managers when signing in on a public computer (scan QR code with phone vs. load entire password DB onto computer or manually retype password via keyboard)
3. Better recovery against the worst-case scenario of database leak (SQRL client can transparently and automatically rotate your credentials, vs having to do it all manually with a password manager, and there's no list of sites you have an account on that the attacker can use against you)
4. Possible for sites to enforce the use of SQRL, whereas password managers need a password field to function, thereby encouraging users to continue insecure practices like weak passwords and password reuse.
5. Public keys instead of bearer tokens means you don't have to worry about rotating credentials if a site leaks its database.
Like I said, it's pretty much just a password manager, but better.
At this point though it seems pretty likely that the Web Authentication Standard has successfully overcome that barrier. It's already partially implemented in multiple browsers (Chrome, Firefox) and backed by a W3C Candidate Recommendation. In the face of that, I don't believe SQRL can compete.
It's the successor of the work Google did with Yubico that eventually led to FIDO U2F. Google did some user research about the rollout among their 50k employees at the time: [Security Keys: Practical Cryptographic Second Factors for the Modern Web](http://fc16.ifca.ai/preproceedings/25_Lang.pdf).
The primary innovation here is browser buy-in, and API hooks for onboarding to make the UI/UX of the process suck less. This is a good thing.
This thing has a masking step, so you always prove to any particular site that you're the same person you were the last time you visited. But you don't (deliberately) present them with any clue as to who that is exactly.
There's even a deliberate choice here: though you can specifically identify models of product [e.g. "This is a Mattel Barbie brand authenticator", so that in principle a bank could agree to use this standard but accept only official bank-branded tokens], the specification says DO NOT put unique serial numbers inside the tokens. If you want to distinguish them, make sure only large production runs are distinguished, e.g. "Batman authenticator" versus "Barbie authenticator", but never "Steve's authenticator" versus "Dave's authenticator", so as to preserve the privacy mechanism.
OpenID was supposed to help here. But even stalwarts like Stack Exchange are backing off from it, leaving a handful of proprietary links and their own OpenID provider.
Of course this is going to help with higher complexity authentication methods (retinal scanners in the 2028 Macbook Pros?) but it's all for naught unless we can patch every PHPBB and Magento clone to stop being so needy when it comes to passwords.
1. What if I lose my finger/password?
2. What if someone else gets a copy of my finger/password?
In both cases, fingerprints seem to make things worse, not better. The fingerprint is great if someone else has to be present to watch you put your finger down.
Attackers are much more likely to assume a user's identity and thus acquire the user's certificates for such a system. That's how most compromises today happen.
The central issue is verifying a claimed identity, not just at enrollment time but also on every use. The spec waves that off to "Authenticators" (see below). But that's not a solution. If user authorization for certificates is based on passwords, for example, then what prevents an attacker from taking compromised data about me and using it to get at those passwords?
Don't get me wrong, this is good work. But I just think everyone seems to be focusing on the wrong end of the stick.
> The technical process by which an authenticator locally authorizes the invocation of the authenticatorMakeCredential and authenticatorGetAssertion operations. User verification MAY be instigated through various authorization gesture modalities; for example, through a touch plus pin code, password entry, or biometric recognition (e.g., presenting a fingerprint) [ISOBiometricVocabulary]. The intent is to be able to distinguish individual users. Note that invocation of the authenticatorMakeCredential and authenticatorGetAssertion operations implies use of key material managed by the authenticator. Note that for security, user verification and use of credential private keys must occur within a single logical security boundary defining the authenticator.
However, as you said, it seems like the other piece of the puzzle (the authenticators) isn't fully baked yet. U2F is one existing option, but that's not really suitable as a password replacement all on its own (it's just "what you have"). Hopefully Google, Mozilla, and Microsoft (all of whom contributed to development of this standard) have some ideas for standard cross-browser, cross-platform authenticators that provide the other authentication factors.
If the user decides to use 12345 as his password for a Web Authentication authenticator that's obviously less than ideal, but still impossible to phish (the browser validates the domain the user is on), infeasible to brute force without access to the user's device (where the actual key pair is stored), and impossible to compromise in a data breach (the site only has a public key).
That's the point of this solution. If we can remove the vast majority of account-compromise tactics and force attackers to achieve success only if they "get one admin," then that's an incredible victory -- especially for the large number of ordinary people who are not admins.
If I visit g00gle.com and sign-in using the Web Authentication API, my browser is going to use my credentials for g00gle.com, not for google.com; unlike me, it _can't_ be fooled by similar-looking characters.
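The browser's phishing resistance boils down to an exact-match lookup keyed by origin; here is a toy sketch of that behavior (the credential values are placeholders):

```python
# The browser stores credentials keyed by exact origin, so a lookalike
# domain simply has no matching credential -- there is nothing to phish.
credentials = {"https://google.com": "keypair-for-google"}

def credential_for(origin: str):
    """Exact string match only; no fuzzy or visual-similarity lookup."""
    return credentials.get(origin)

assert credential_for("https://google.com") == "keypair-for-google"
assert credential_for("https://g00gle.com") is None  # lookalike gets nothing
```

Unlike a human squinting at an address bar, the lookup cannot be fooled by visually similar characters, because it never compares how strings look, only whether they are byte-identical.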
In the age of punycode this has become particularly important, because the human eye cannot reliably distinguish ASCII from punycode lookalikes; many are visually indistinguishable in many fonts.
If your concern is "well, there's always the admin/support backdoor", i.e. a compromised admin account or a social engineering attack on support personnel could lead to attacker-controlled keys being enrolled, I'm afraid that's not really something you can solve by just throwing new technology at the problem. However, you'll definitely make it harder to even get to the point where admin accounts are compromised by rolling out U2F or webauthn.
Personally, I'm perfectly fine with a technical solution that solves phishing for everything but the most advanced social engineering campaigns.
That is not great, but it's much much better than the 500 million people in the Have I Been Pwned database.
Instead of telling how this solution is not perfect (no solution is perfect) tell us how this solution is worse than our current situation. The fact is, it's not. It's actually better in a number of ways as other comments have pointed out.
Maybe I'm misunderstanding your point, but an attacker will also need physical access to your authenticator. That's a much less scalable attack vector.
As for a real problem, others pointed out phishing already. U2F and Webauthn aren't just a certificate management solution, but a protocol for the user agent to make trusted statements to the server.
So, I don't see how the presence of that problem suggests it's not worthwhile to work on better types of credentials.
This transfers the problem away from both of those groups. Individual no longer have to create multiple secure passwords and website developers no longer have to store those precious secrets. This is a huge improvement. You are correct that it's not perfect, but instead of security pros spending all their energy teaching people how to manage passwords, they can focus on the remaining problems that you point out.
No, it completely eliminates several huge vulnerability domains. Just because it doesn't solve a different one doesn't make it bad.
So far, https://www.imperialviolet.org/2018/03/27/webauthn.html is the closest; it’s good, but I’d love to have something with more straightforward instructions—“so you currently use U2F? Change this to this, that to that, do this to maintain compatibility and you’re roughly done”. We’ll figure it out for FastMail, but if the migration is straightforward and well-documented it’ll help all the web services out there to move sooner rather than later, which as a user I would very much like.
It would also probably be easier (or at least not harder) for people than a username and password. Just tell users to press the button and enter a password to sign in.
To me this looks like a duplication of effort.
I absolutely hate getting told to go get my phone or go to my email.
This also makes me a lot more paranoid about losing my phone.
I think I'll pass.