Web Authentication: Proposed API for accessing Public Key Credentials (w3.org)
456 points by jtbayly 3 months ago | 196 comments



Url changed from https://www.engadget.com/2018/04/10/fido-w3c-web-authenticat..., which points to this.


The article is a bit misleading, with its biometrics red herring. Unless I'm mistaken, this is WebAuthn, the new standard that will allow devices to authenticate using cryptographic keys they have generated and stored. It's kind of like supercharged U2F.

Basically, your phone and computer (or USB key) will be able to generate keys and authenticate to each website uniquely, without the possibility of someone stealing your password or phishing it/spoofing the site.
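The shape of that flow can be sketched in Python. This is a deliberate simplification: HMAC with a shared key stands in for the authenticator's public-key signature, whereas real WebAuthn uses asymmetric key pairs so no secret ever reaches the server. All class and method names here are illustrative.

```python
import hashlib
import hmac
import secrets

class Authenticator:
    """Toy stand-in for a phone or USB security key."""
    def __init__(self):
        self._key = secrets.token_bytes(32)  # never leaves the device

    def register(self):
        # Real WebAuthn would hand the server a *public* key here;
        # sharing the HMAC key is the simplification in this sketch.
        return self._key

    def sign(self, challenge: bytes, origin: str) -> bytes:
        # Binding the signature to the origin is what defeats phishing:
        # a signature for the real site is useless to a look-alike site.
        return hmac.new(self._key, origin.encode() + challenge,
                        hashlib.sha256).digest()

class Server:
    ORIGIN = "https://example.com"

    def __init__(self, key: bytes):
        self._key = key
        self._issued = set()

    def new_challenge(self) -> bytes:
        c = secrets.token_bytes(16)
        self._issued.add(c)
        return c

    def verify(self, challenge: bytes, sig: bytes) -> bool:
        if challenge not in self._issued:   # unknown or replayed challenge
            return False
        self._issued.discard(challenge)     # each challenge is single-use
        expected = hmac.new(self._key, self.ORIGIN.encode() + challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, sig)
```

The single-use challenge is what gives the replay protection: even a captured signature is worthless once the challenge has been consumed.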

It's a fantastic improvement: imagine being able to log in to a site on any untrusted computer just by plugging in your USB key, and being totally secure again after you logged out of the site.


> your phone and computer will be able to generate keys and authenticate to each website

Welcome to twenty years ago and client SSL certificates.

I'm glad the future is catching up.


Unfortunately, OSes and browsers never made the installation of SSL certs user-friendly. The requirement that the private key be accessible to the PC we're using doesn't really help either; generating OTPs/keys on a separate device seems to be the way to go.


There was never any requirement that the private key be accessible from the local computer. Netscape/NSS had pluggable security modules with full support for PKCS#11-style smart cards back in 1998, I believe, so "20 years ago" isn't even overstating things.

There was a real big industry push back then, and many expected smartcards to take over, but it fizzled: hardware companies generally don't do software well, and nobody really wanted the added complexity of extra hardware.

Maybe it will go better now that individual users have a use case, as things like Gmail and Coinbase support it, and not just the usual enterprise platforms.


I don't see what you find misleading. The protocol is designed in such a way that an authenticator (e.g., a smartphone or a USB dongle) can check a biometric factor (or a PIN or something else) locally before allowing use of the private key. The authenticator also provides the server with its attestation certificate when you register it, and using that attestation certificate the server can verify what kind of authenticator it is and trust that the authenticator can be used as multiple kinds of authentication factor.

It is true that I don't know of any currently existing authenticators that support biometric authentication, but what's nice about this spec is that the biometric/PIN/etc stuff is never exposed to the server. When new kinds of authenticators enter the market (like if Google adds the feature in an Android system update), they can immediately be used without having to change any code on the servers. And like you say, even without the multi-factor stuff it'll be a fantastic improvement over the current dumpster fire that is passwords.

Full disclosure: I work at Yubico and am one of the editors of the Web Authentication spec.


That would be a good use for a smart-watch.

Fitting keyboards with NFC devices could make it even better.


> Fitting keyboards with NFC devices could make it even better.

As long as the user must do something on the device to approve it, otherwise you can have proximity scanning attacks as with RFID.


Thankfully NFC can be two-way.


Or good old smart cards with smartcard reader in the keyboard again?


You cannot use an untrusted computer to reliably log out.


> You cannot use an untrusted computer to reliably log out.

You could conceivably have a login sessions page and access it from a trusted computer to force logout of a session from an untrusted computer.

But it seems the bigger problem with login from a malicious computer is that the computer could do whatever it wants with your session while you're logged in, regardless of how securely you can logout later.


Yes. As soon as you use an untrusted computer, there is no telling what you are vulnerable to. But:

(1) people frequently (and rightly) do that anyway, so we might as well make it less bad.

(2) Building a trustworthy computer is largely about isolating different components in a way that hurts malware more than it does the user-experience. Sticking keys on a USB key is one instance of such isolation.


If, as GP suggests, you have to plug a USB device in to auth the first time, why not just run the protocol through the device for every single HTTP request? Then, if the user unplugged the device, future requests would fail, right? I don't know if the protocol under discussion forbids that, but it seems like an easy change?


A USB device would only be one of many options - relying on it would be pretty backward looking.


Other things would have other options. You might have a bluetooth or nfc device that has a "momentary" button, that e.g. you have to press less than five seconds before any particular web request.


That's a straight path to frustration on today's web


Maybe. A better solution might be just doing a server-side "authenticity ping" every x seconds; if that fails, you get kicked out.

This would leave a window of x seconds after logging out in which attacks could occur, but that could be mitigated by requiring a failed request on logout: require the USB key to be removed upon logout, send an auth-ping, and only give a successful logout message if the auth-ping fails. (Even without this mitigation I'm not sure what kind of attack would work during the in-between x seconds, but might as well mitigate it in case someone else thinks of something.)
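The auth-ping idea amounts to putting a sliding TTL on the session: pings keep it alive, and when the key is unplugged the pings stop and the session lapses. A minimal sketch (illustrative only, not a real protocol; names are made up):

```python
import time

class Session:
    """Session that stays valid only while auth-pings keep arriving."""
    def __init__(self, ttl: float = 30.0):
        self.ttl = ttl                      # the "x seconds" window
        self.last_ping = time.monotonic()

    def ping(self) -> None:
        # Called every x seconds while the USB key is still present.
        self.last_ping = time.monotonic()

    def is_valid(self, now: float = None) -> bool:
        # `now` can be injected for testing; defaults to the real clock.
        now = time.monotonic() if now is None else now
        return (now - self.last_ping) <= self.ttl
```

The window the comment worries about is exactly `ttl`: shrinking it tightens security at the cost of more frequent pings.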


Might as well just boot linux from a USB at that point.


Sadly, just being able to boot your own OS is not enough to trust the resulting hardware+OS combo.


If login becomes quicker and less disruptive it might make sense for websites to default to expiring sessions a lot faster. If all you need to do is scan your watch or press a button on your USB token it doesn't matter if you have to do it 10 times every day. If that's still not enough websites could ask for confirmation for every potentially sensitive action.

Of course there's only so much you can do if your hardware is compromised, you can just try to mitigate the issue as much as possible and hope for the best.


Nor can you today - a completely untrusted computer could fake the logout experience and keep a valid access token.

All signed access tokens, when logged out or revoked, should be marked as such on the server side. Of course, the untrusted computer would at the very least have to send the logout request.

Typically, a service should also let you see all your active sessions and let you revoke them.
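That server-side bookkeeping is simple to picture: every token is checked against the server's store on each request, so a revoked token is dead regardless of what the untrusted computer kept. A minimal sketch (all names illustrative):

```python
class SessionStore:
    """Server-side registry of active tokens, supporting revocation."""
    def __init__(self):
        self._sessions = {}  # token -> username

    def login(self, token: str, user: str) -> None:
        self._sessions[token] = user

    def active_sessions(self, user: str):
        # What a "your active sessions" page would display.
        return [t for t, u in self._sessions.items() if u == user]

    def revoke(self, token: str) -> None:
        self._sessions.pop(token, None)

    def is_valid(self, token: str) -> bool:
        # Checked on every request; a revoked token fails immediately.
        return token in self._sessions
```

Revoking from a trusted machine kills the cafe machine's token even if that machine never sent a logout request.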


Especially when the norm for refresh tokens is like, 90 days.


> It's a fantastic improvement, imagine if you could log in to a site on any untrusted computer just by plugging your USB key in

You're describing plugging a USB key containing your identity into an "untrusted computer".

It's hard to call that a "fantastic improvement" or really anything other than a "terrible idea".


How so? The private key never leaves the device, the certificates are site-based, preventing phishing, and they require actual physical interaction to authenticate.

This is miles better than password-based login.


> "terrible idea"

Tell that to Yubico et al, who have based their businesses exactly on this "terrible idea", making secure USB devices that can be plugged in to insecure computers.


For the record, I have never plugged either of my Yubikeys into an "untrusted" computer.

Well, no machines more untrusted than the ones I do work or internet banking on.

I hope people don't think of Yubikey-type devices as "magic security pixie dust" that somehow makes it OK to do sensitive things on an internet cafe PC? (Fully prepared to be told anecdotes of people doing stuff I'd consider crazy in response to this...)


In theory a yubikey is safer than simply logging into a website using passwords when the device is untrusted.

A USB device like a YubiKey could provide an untrusted (or even trusted) device with only a temporary access token. The untrusted device can only manipulate a user session with the user present; once the user has left, the session expires and the computer has no access token.

WebAuthn makes it impossible to continue or begin a session without being authorized to do so if a USB device is used.

TOTP is one implementation of a "trusted device authorizes untrusted device" protocol that YubiKey/WebAuthn/others enable.

Compare with password logins: Once you enter your password you trust the device you entered it on to not compromise it.


It's still only one or two of the three authentication factors.


A phone provides a fingerprint scanner (something you are) and holds the keypair (something you have); when combined with a password (something you know), you have all three.

But we still need the phones to support U2F.


U2F is actually the older standard. WebAuthn can work with U2F devices, but it adds some things, like user identifiers, that make it more useful to authenticate with.


You're right, my bad. But I still wish phones supported emulating the existing FIDO U2F standard.


Agreed. I adore U2F, and even have it running under Safari because I love it that much; but without phone support for it, my cute Yubikey really only helps for desktop usage currently.

As it stands, my pass phrase and fingerprint are the weak links, as phone logins rely on TOTP. One day I’ll be able to tap my phone against my Yubikey along with my pass phrase, and/or my fingerprint. That would be lovely.


I was thinking about soft U2F tokens and came to the conclusion that they wouldn't be that much different from a cookie that just always bypassed the second factor for that specific browser. Of course, it could be stolen from the browser, but so could the U2F key.


The browser never sees the key with U2F; everything is computed on the USB device.


I'm talking about soft U2F tokens, i.e. software-based.


> One day I’ll be able to tap my phone against my Yubikey along with my pass phrase, and/or my fingerprint. That would be lovely.

That works fine for me on Android.


Me too :/ It would be great against phishing.


> a fingerprint scanner (something you are), and holds the keypair (something you have)

What segregation between these is there within the phone? Doesn't it mean the fingerprint gets you the keypair for free?


If you also have the physical phone in your hands, yes.


You now have a severed finger. Where's the phone? You'll still need both items.


The point is, it relies on a JavaScript feature, right?


Fingerprints are usernames, not passwords (2013): http://blog.dustinkirkland.com/2013/10/fingerprints-are-user...

edit: The original headline before being changed, "Web standard brings password-free sign-ins to virtually any site", and contained a paragraph espousing fingerprints in place of passwords.


This meme needs to die. Fingerprints are a perfectly fine authentication factor. They are unique enough and require effort to fake.

Consider a simple fingerprint USB vault which stores your keys:

* Factor 1: You must have physical possession of my vault.

* Factor 2: You must be me or have a convincing fake of my fingerprint.

Before we even think about a password I've already prevented almost all of the attacks I'm likely to ever encounter against my accounts.

* I have made it impossible for someone to casually break into my accounts/device.

* I've created enormous distance between myself and remote attackers.

* I've eliminated password reuse and contained the effect of data breaches to the service that was breached.

* I've made it much more difficult for network operators to carry out MitM attacks since tokens are origin bound and the challenges are real-time with replay protection.

Yes, in a forum of nerds you can point out that lifting fingerprints is possible, but if everyone switched to this simple U2F device the world would be far, far more secure. Passwords optional.

Then if you're worried about more sophisticated attackers, like corporate espionage or governments, you can add a password.


The difference is you leave your password everywhere you go. It doesn't matter how unique it is; if I leave a sticky note with my password everywhere I touch, then it's not very good security.


Fingerprints are also not that reliable. Some gun safes that use fingerprints are notorious for opening too easily for the wrong fingerprints. On the other end of the spectrum, India is struggling with its fingerprint authentication system, as the system fails to recognize the fingerprints on file. [1]

[1] https://scroll.in/article/857274/now-even-the-fingerprints-o...


> The difference is you leave your password everywhere you go.

And you can't easily change them.


If you have something as sophisticated as TouchID, it's actually hard to replicate a valid print. CCC did it in a lab situation (wine glass + latex paint); how is a smear on a tabletop or mug going to be retrievable for a sophisticated scanner?


If I recall correctly, someone was able to fool TouchID a few years ago using a fake print created purely from pictures of a German government official.


That was impressive, but it's still something of an edge case: he used it to register and unlock a phone but did not unlock her phone, and it sounds like it required a photograph taken a few feet away at a press conference using a big lens:

https://www.youtube.com/watch?annotation_id=annotation_26842...

I think for most people convenience remains a win over the marginal increase in risk — someone who can get that close to you can also use a hidden camera/drone to watch you enter a password, steal your wallet/bag with two-factor codes, etc.


Given that the YT link is in German, I'm going to pass on deciphering it.

How does registering and unlocking another phone show that it would work on her phone?

I really don't think TouchID is at all riskier than even a 6-digit passcode. I really still wish Apple allowed multi-factor unlocking though.


That’s the English translation but, yes, recovering a fingerprint which can be used to unlock a test device doesn’t show it can replace the original. It’s prudent to assume that the attacks will get better but I think this really highlights the difference between broad and targeted attacks since so many people are better off with the fingerprint unlock.


I recall it was not TouchID that was fooled; Starbug just copied her fingerprint via photos [1].

I don't remember how/if it was demonstrated that the fingerprint was a useful copy in any way (and certainly not on that official's iPhone).

https://www.theguardian.com/technology/2014/dec/30/hacker-fa...


Additionally, phones that have biometric features first require a PIN to be turned on or unlocked after a period of time.

So you really have Factor 3: You must have the password to power on the fingerprint reader.

The fingerprint isn't being used as the password.


Well, on most phones since the 5s, yes.


> require effort to fake

Only because you have to lift and then manufacture them from scratch from glue and silicone and stuff. If someone automated the process it would require little to no effort. In theory it would be possible to manufacture a device that could present any given fingerprint when scanned with a popular scanner. You leave them everywhere, even on the scanner itself.

It is also a limitation of biometrics that you can only use them once. It might make sense for a phone, but after you have given Google your fingerprints, they can in turn use them for other purposes. It's like reusing a password that's also tricky to rotate.


That's the nice thing about webauthn biometrics, though: the biometric data is never sent to the server. The test is done locally, and the server can trust it by verifying a cryptographic attestation of the authenticator's capabilities. And on the flip side, the user can opt in to biometric authentication even if the server does not require or care about it.


Both of your factors boil down to the physical presence of two things that will usually be in close proximity to each other. Is it really fair to consider that “two-factor”?

Both your fingerprints and the vault can be taken from your person without your consent; a password much less easily.


If somebody has the ability to force me to give up my fingerprints, I am also going to give them my password. Hard to say no to somebody with a gun.


The adversary under consideration is more likely the courts.


You leave latents all over the place.


I semi-regularly have my fingerprints distorted by cuts and burns to the point where they can't be identified by a scanner.

You can't replace passwords with fingerprints, as you still need a backup to update said system.


> They are unique enough

You know this how? There have not actually been any real studies done, and the FBI and law enforcement resist any efforts to study how unique fingerprints really are.

It has been the bedrock of criminal prosecution for over a century, but they do not want any public analysis of how many people share similar prints.

Print Reading, even by a computer, is more of an art form, a massive guess, than it is a science.


Fingerprints are IMO fine for recurring logins. I set my phone to require my password on first boot, but while it's running a fingerprint is good enough.

Same could go for websites. A simple biometric factor like a fingerprint is easy and friction-free enough to log in users on previously seen devices. My password can then also be much longer, and the website can impose stricter rules (longer passwords, no breached passwords, etc.) without increasing user friction much either, since most people will probably only log in from four devices at most (desktop, phone, tablet, and some library computer).


The fingerprint is just to authenticate to your local device. So an attacker with access to your fingerprint can't get at your account without also having your phone (which stores the private key _actually_ used for remote authentication in secure, tamper-resistant hardware).


Forgive my ignorance, but wouldn't backends need to store a "hash" of your fingerprint to validate? This essentially means a single giant password.


1. A fingerprint is something you cannot change, cannot revoke if leaked and cannot be unique across different sites.

2. A fingerprint hash isn't a cryptographic hash, because you need to be able to accept nearby matches: a small variation in input needs to produce a small variation in the hash, so that a distance function can be applied.

Those are terrible properties for a password.
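Point 2 can be made concrete. A biometric matcher compares feature templates with a distance function and accepts anything close enough, while a cryptographic hash flips completely on a one-bit change. A toy sketch with 8-bit "templates" (purely illustrative, nothing like a real fingerprint encoding):

```python
import hashlib

def hamming(a, b):
    """Count positions where two equal-length templates differ."""
    return sum(x != y for x, y in zip(a, b))

def matches(scan, template, threshold=2):
    # Biometric matching: accept *nearby* inputs, not only exact ones.
    return hamming(scan, template) <= threshold

template = [1, 0, 1, 1, 0, 0, 1, 0]   # enrolled "fingerprint"
noisy    = [1, 0, 1, 0, 0, 0, 1, 0]   # one bit off: same finger, noisy scan

# A cryptographic hash does the opposite: a one-bit change produces a
# completely unrelated digest, so no distance function can be applied.
h_a = hashlib.sha256(bytes(template)).hexdigest()
h_b = hashlib.sha256(bytes(noisy)).hexdigest()
```

The noisy scan still matches the template, but the two digests share no useful structure - which is exactly why you cannot store fingerprints the way you store password hashes.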


> A fingerprint is something you cannot change

Many builders/carpenters/etc. will tell you this is not true. People who work in abrasive environments, sometimes without proper protection, often temporarily have no fingerprints, as they are "worn off".

Many injuries can effectively modify or remove them too, at least temporarily.

This makes them bad usernames as well as bad passwords.


> People who work in abrasive environments sometimes without proper protection often temporarily have no fingerprints as they are "worn off".

Do they come back in the same form as they previously were?


Either way, this is a terrible argument toward good security. "Oh, someone got a copy of your fingerprint? No problem! There's a belt sander right over there!"


This is a sore point with the new iPhones for me. It happens at least once a week with routine hobby-farm work. I'm really looking forward to getting FaceID.


Interesting. I wonder if anyone has studied what it would take to render fingerprints useless? Like N minutes/day of sanding with Y grit sandpaper?

Additionally, I was under the impression that some fingerprint readers looked at the blood vessels rather than the actual prints. Not sure how that would be affected by abrasion. Perhaps this is a misunderstanding on my end.


An example that can affect many people: my iPhone can't read my fingerprint when I'm sweating at the gym.


IMO, you don't even need any arguments beyond #1. Your fingerprint can't be changed and it's public information. You leave traces/copies of it everywhere you go. Just because it's more difficult to read now doesn't mean that technology won't make lifting and reusing your oily prints trivial someday.

Would you create a rubber stamp of your passwords, slather it with oil, and then go around stamping it on everything you own and everywhere you go?


This is the current case with SSNs in North America. A unique identifier being treated as a password cannot act as a password.


But you have 10 of them (20 counting toes). Why can't you just cycle through them as needed?


"Press your toe here" has to be some of the worst ux ever


I see that you haven't used git.


You can just keep your socks on to protect your login toeprint when not in use.


furthermore, you can combine your fingers/toes in specific unique combinations. it's foolproof


I like it. Two factors in one! Even allows panic passwords.


And let's not forget that some people don't have fingerprints


No, in webauthn they don't, because the biometric data is never sent to the server. The test is done locally, and the server can trust it (if it cares) by verifying a cryptographic attestation of the authenticator's capabilities. All that's sent to the server is a public key and a private key signature.


How do two parties mutually verify, authenticate each other online?

I'm struggling to remember the protocol from a sci-fi novel where two secret agents who (separately) had their minds transferred to new meat sacks reconnected in a new (hostile) environment.

I think it had three parts: What you have, what you know, what you are.


> How do two parties mutually verify, authenticate each other online?

We verify the server's identity through its public certificate that's signed by a certificate authority. The server can verify the client's identity via a public client certificate that's signed by an authority the server trusts. It's already possible to do this over a TLS connection.
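With Python's `ssl` module, for instance, the server side of that mutual-TLS setup is just a matter of requiring client verification. The file paths below are placeholders; in a real deployment they would point at your server certificate and the CA that issued your clients' certificates:

```python
import ssl

# Server-side context that *requires* a verified client certificate
# (mutual TLS). Without a valid client cert, the handshake fails.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED

# Placeholder paths - supply real files before wrapping a socket:
# ctx.load_cert_chain("server.crt", "server.key")
# ctx.load_verify_locations("client-ca.crt")
```

The only difference from ordinary HTTPS is that `verify_mode` is `CERT_REQUIRED` on the server context instead of its default of not requesting a client certificate at all.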


Sorry, I wasn't being clear.

If the finger print is what you are, and password is what you know, then what is the "what you have"?

Mostly I'm curious if that sci-fi books' "three factor auth" scheme (because I don't know what else to call it) is a feasible model.


> If the finger print is what you are, and password is what you know, then what is the "what you have"?

One possible form of "three factor auth" would be to use a passphrase for the private key, the client certificate to connect with the server over TLS, and a username/password login at the application level.

The certificate/certificate fingerprint is what you have, the password and passphrase are two things that you know. I don't know what would fit under the "what you are" category though (unless you're considering some sort of biometric based method).


Two humans who know each other can use the Socialist Millionaire's Protocol. It does some fancy mathematics to prove they were both thinking of the same number (for Bill and Ted this would be "69, dude"), and of course we can encode any answer, e.g. "Sarah", "Washington", "Lakers Game Six", as a large number trivially. The SMP would be weak if you could iterate it many times, but it's for humans: after "Ted" guesses 4, 19, and 22, Bill will stop asking and assume it's not Ted at the far end.
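A naive version of "compare answers without sending them" can be sketched with hash commitments. Note this is not SMP: a plain hash commitment allows offline guessing of low-entropy answers, which is exactly what SMP's mathematics prevents. It only illustrates the shape of the comparison:

```python
import hashlib
import secrets

def commit(secret: str, nonce: bytes) -> str:
    """Commitment to a secret, bound to a per-session nonce."""
    return hashlib.sha256(nonce + secret.encode()).hexdigest()

# Both sides agree on a fresh session nonce, then exchange commitments
# instead of the answers themselves.
nonce = secrets.token_bytes(16)
bill = commit("69, dude", nonce)
ted = commit("69, dude", nonce)
impostor = commit("42, man", nonce)
```

Matching commitments mean matching answers; a wrong guess produces a different digest. The weakness (absent in real SMP) is that anyone holding a commitment can grind through candidate answers offline.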

A machine can obtain Certificates from a CA which show the CA validated its identity, and use Public Key Cryptography to prove this is its certificate. This is how HTTPS works when you connect to a remote web site, but it can be mutually authenticated too, that's just not how web browsers use it.


> https://www.w3.org/TR/2018/CR-webauthn-20180320/#sctn-cose-a...

I'm not feeling great about these algorithm choices. Namely: RSA with PKCS1v1.5

See also: https://robotattack.org

EDIT: Also, elliptic curve digital signatures with a random nonce (here, "r") instead of deterministic nonces makes me feel itchy.

https://fidoalliance.org/specs/fido-uaf-v1.1-id-20170202/fid...


Link to the actual standard: https://www.w3.org/TR/2018/CR-webauthn-20180320/

The "Use Cases" section in particular does a pretty good job of explaining how this would work from a user's perspective: https://www.w3.org/TR/2018/CR-webauthn-20180320/#use-cases


Client-side TLS certs have been a thing approximately forever, yet everyone is hesitant to use them because... installing a cert in your browser is hard, I guess?

Many corporations and agencies use certificates either as the primary (only) auth method, or as supplementary auth. For example, DTrade, the US defense export licensing system: http://www.pmddtc.state.gov/DTRADE/index.html

We use them at work and they work great. Pretty much instant auth to any internal site with no shenanigans.


I used them for an extranet system I built around 10 years ago. Certificates are a total PITA, and users hate them: they need to be provided with step-by-step instructions on how to install them, and they still manage to struggle with it. Plus, the procedure differs between browsers. And they generally have private key export disabled, so you can only use them on the machine you first installed them to.

Oh, and some companies have things so locked down that users can't install client certs at all!

Things have barely changed here, so I avoid certs for users - having said all that, they work great for authentication between software components where there are no pesky users involved.


There is a way to "offer" a client cert bundle to a browser, where the browser would simply ask the user whether they want to accept the personal cert into their trust store. Pretty smooth process.

As for downsides, yes, no exporting: users have to go through the enrollment process with each new device.


> Pretty smooth process

You wouldn't say that if you'd had to support our users!

At least when we created the system, the method of private key generation differed from browser to browser. Then we had to sign their CSR in our offline PKI system and email them a password-protected PKCS#12 certificate bundle, and send them the password out-of-band. Finally, they had to import the bundle into their browser. It could have been simpler, but "big corporation" :(


Either this description is wrong, or this process is crazy, or both.

The whole _point_ of a CSR is that you don't need any secrets. The CSR proves that its creator has some private key in their possession and wants a certificate. You need to verify that whoever presented the CSR is really who they said they are (this is hard, hence ACME for the Web PKI, but it's out of scope for the Public Key Crypto problem) and then you issue a Certificate, which isn't secret.

Emailing PKCS#12 bundles (which have a key inside them) is asking for trouble, but more importantly here: if the user already has a perfectly nice key, why are you sending them a new one?


Why wouldn't you be able to export the key? Is this something that is "enforced" by the user interface of some browsers?


There's a flag that can be set (I don't remember if it's in the cert itself or during the import process) that makes the key un-exportable.

The idea is each cert identifies a device. There's no reason to not just issue one cert per device, especially if you make the enrollment process fairly painless. If a user loses their phone, it's very nice to not have to revoke their one global cert -- just revoke the one that was used on the phone and be on your way.


Yes, it is hard, and the UX is horrible.

Google proposed, and Chrome implements, Origin-Bound Certificates (http://www.czeskis.com/research/pubs/tls-obc.pdf) which are just about as good and with no UX impact. Section 8 covers many of the problems with TLS client certs.

AFAIK Chrome has the only implementation of OBC.


They are also not trivial for webdevs to implement, since the webapp usually doesn't terminate the TLS connection itself.


The web server usually handles that side of things.

The big problems you have are private key generation, certificate distribution, revocation, renewal, and guiding your users on how to install the damn things.


So now your authentication logic lives in your Nginx config, a rather odd language with its own fair share of quirks. That's not great, and it's not where the logic conceptually fits.


You generally setup the reverse proxy to pass the required information to the backend via some trusted mechanism (e.g. HTTP headers not settable through client requests).


Wouldn't that make it easier? In an nginx use case, nginx handles verification and termination, and just passes a user identifier variable back to the origin.
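For example, a minimal WSGI backend that trusts a client-cert header set by the proxy might look like this. The header name `X-SSL-Client-CN` is a convention you would configure in nginx yourself, not a standard, and the whole scheme is only safe if the proxy strips that header from incoming client requests:

```python
def app(environ, start_response):
    # Trust this header only because the reverse proxy sets it after
    # verifying the client certificate, and strips any client-supplied copy.
    user = environ.get("HTTP_X_SSL_CLIENT_CN")  # hypothetical header name
    if user is None:
        start_response("401 Unauthorized", [("Content-Type", "text/plain")])
        return [b"no client certificate"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("hello " + user).encode()]
```

The application never touches TLS; it just reads an identity the terminator vouched for, which is the "trusted mechanism" the parent comment describes.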


Is that really an issue? AFAIK on nginx you can have it forward the certificate in a header.


It's an issue if you use a CDN or other platform-provided TLS terminator; they don't usually support client certificates. And even if you _can_ support it in your reverse proxy and put the certificate details in a header, that's a very different server-side interface from terminating the client TLS connection in the service itself.


Client certs have a problem WebAuthn doesn't: they enable tracking, very trivially.

With client certs, I sign into Facebook, Hacker News, LWN, and all three of them get told "This is tialaramex, serial number 183749265". So does LegitAds, and XHamster, and Penny Arcade. Which is great for advertisers, but probably not what most users want.

With this technology, each site gets a consistent but site-specific identity. So they know it's still "me", but they have to collaborate out of band to discern that all these people are the same person, as they would without any authentication framework.
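One way to picture that unlinkability: derive a stable but site-specific credential from a device secret and the origin. (A sketch only; real authenticators derive a fresh key pair per relying party rather than an HMAC tag, and these names are made up.)

```python
import hashlib
import hmac

def credential_id(master_key: bytes, origin: str) -> str:
    # Same device + same site -> same identifier (consistent identity);
    # different sites -> unrelated identifiers (no cross-site linking).
    return hmac.new(master_key, origin.encode(), hashlib.sha256).hexdigest()

key = b"device-master-secret"  # stays on the authenticator
fb = credential_id(key, "https://facebook.com")
hn = credential_id(key, "https://news.ycombinator.com")
```

Each site sees a stable identity for returning visits, but no two sites can compare notes and conclude the identifiers belong to the same person - unlike a single client cert serial number shown to everyone.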

The requirement for user action to authenticate has a benefit here too - advertisers even if they have Facebook's permission to check which Facebook user I am, daren't risk saying "Er, hi, can you just sign into Facebook for us?" because it's obvious what's going on. Only the places which already explicitly need authentication will ask.


Since it's at the TLS level, the frontend/load balancer would have to deal with user authentication, which can be challenging.


And then how do you logout? How do you handle people wanting to login to a shared computer? (like in a conference room)


No logging out. You're no longer authenticating "users", you're authenticating "devices". This is in a corporate setting, of course, where if there's a shared computer each user logs in to their own environment via LDAP/AD/whatever.


The system we're building (and selling to big US administration groups) uses certificates to authenticate. I've never found installing a certificate hard.


Unfortunately, client certs (broken as the UX was) are headed out of browsers :/


Do any mobile browsers even support them?


This week's Risky Business has a pretty long segment on this: https://risky.biz/RB494/

I think it's a less bad alternative for people using the same password on everything. As many comments here already pointed out, it's not perfect. I don't feel like I know enough about it to know just how good it will be for the average person.

See also: that SQRL thing Steve Gibson has been working on for years?


I've never talked to anyone who took SQRL seriously, or even seriously looked at it for flaws. The problem with secure login isn't that you can't invent some random ad-hoc crypto protocol to log into sites with; it's that what you come up with has to be so credible that lots of sites, and eventually browsers, will implement it. SQRL is not that.


SQRL is actually a reasonably well-designed authentication system; certainly better than passwords at least. But as you said, few if any sites ever bothered to implement it. There's just no incentive for sites to adopt a new authentication method when passwords already "work fine" from their perspective.

Hopefully the web authentication standard won't suffer the same fate. It _is_ backed by several major companies (Google, Microsoft, Mozilla) so at least they'll be able to kickstart adoption by implementing the API on their own sites.


SQRL has been debunked. Google quickly returns https://security.blogoverflow.com/2013/10/debunking-sqrl/ but there are plenty other critical reviews.


Yes, I've seen that post. It's very misleading, to the point of being flat-out wrong.

> Authentication and identification is combined

This is plain false. There's nothing in the spec that says you can't give a site your email address, and there _is_ a built-in way to revoke credentials in the spec.

> Single point of failure

This is dumb. Password managers have exactly the same flaw and nobody seems to have a problem with that.

> Social Engineering attacks

This is true, but only in exactly the same way that password managers are vulnerable. (If the user is for whatever reason not using a browser plugin to authenticate, _and_ you can trick them into entering their info on the wrong site.)


Thanks for a thorough reply. I'll take a new look at it, even though it's pretty clear SQRL is long dead.


I think SQRL is pretty silly and not an especially well-designed protocol but this debunking post is not great either.


Why? What makes it well-designed? To me, it looks like a sort of especially clunky marriage of SRP and one of those deterministic password managers, with an off-putting extra QR code step.


The QR code is just for when you want to authenticate on a computer you don't have the client software installed on. Otherwise it pretty much functions like an improved version of a password manager that uses public key authentication instead of bearer tokens.


What's "improved" about this? What makes this better than SRP and a password manager?


Never heard of SRP before, so I have no idea how it compares. As for password managers, SQRL has the following advantages:

1. Easier to set up new accounts. No fiddling with onerous password requirements or text boxes; just one click and you're done.

2. More secure and/or convenient than password managers when signing in on a public computer (scan QR code with phone vs. load entire password DB onto computer or manually retype password via keyboard)

3. Better recovery against the worst-case scenario of database leak (SQRL client can transparently and automatically rotate your credentials, vs having to do it all manually with a password manager, and there's no list of sites you have an account on that the attacker can use against you)

4. Possible for sites to enforce the use of SQRL, whereas password managers need a password field to function, thereby encouraging users to continue insecure practices like weak passwords and password reuse.

5. Public keys instead of bearer tokens means you don't have to worry about rotating credentials if a site leaks its database.

Like I said, it's pretty much just a password manager, but better.


SQRL's biggest advantage over other proposals for a password replacement was that it didn't need buy-in from browser vendors to work.

At this point though it seems pretty likely that the Web Authentication Standard has successfully overcome that barrier. It's already partially implemented in multiple browsers (Chrome, Firefox) and backed by a W3C Candidate Recommendation. In the face of that, I don't believe SQRL can compete.


> I don't feel like I know enough about it to know just how good it will be for the average person.

It's the successor of the work Google did with Yubico that eventually led to Fido U2F. Google did some user research about the rollout amongst their 50k employees at the time: [Security Keys: Practical Cryptographic Second Factors for the Modern Web](http://fc16.ifca.ai/preproceedings/25_Lang.pdf).


Hey look, client certs!

The primary innovation here is browser buy-in, and API hooks for onboarding to make the UI/UX of the process suck less. This is a good thing.


Client certs always prove that niftich is niftich. This _might_ be fine, but most people aren't comfortable giving the same credentials to their bank, Facebook, and RedTube.

This thing has a masking step, so you always prove to any particular site that you're the same person you were the last time you visited. But you don't (deliberately) present them with any clue as to who that is exactly.

There's even a deliberate choice here: although a token can identify its model [e.g. "This is a Mattel Barbie brand authenticator", so that in principle a bank could adopt the standard but accept only official bank-branded tokens], the specification says DO NOT put unique serial numbers inside the tokens. If you want to distinguish them, distinguish only large runs, e.g. "Batman authenticator" versus "Barbie authenticator", but never "Steve's authenticator" versus "Dave's authenticator", so as to preserve the privacy mechanism.


Just to be clear: The tokens _can_ contain unique serial numbers in some vendor-specific format - but they MUST NOT be in the attestation certificate, so the WebAuthn API will never expose them to the server.


I'll start to relax when every forum and shopping cart doesn't demand I save a password to create an account with them.

OpenID was supposed to help here. But even stalwarts like Stack Exchange are backing off from it, leaving a handful of proprietary links and their own OpenID provider.

Of course this is going to help with higher complexity authentication methods (retinal scanners in the 2028 Macbook Pros?) but it's all for naught unless we can patch every PHPBB and Magento clone to stop being so needy when it comes to passwords.


Whatever system you come up with, we have to ask these questions:

1. What if I lose my finger/password?

2. What if someone else gets a copy of my finger/password?

In both cases, fingerprints seem to make things worse, not better. The fingerprint is great if someone else has to be present to watch you put your finger down.


Not sure if anyone has linked the Mozilla page yet, but https://developer.mozilla.org/en-US/docs/Web/API/Web_Authent... seems to be a good guide for developers.


While I applaud the hard work the W3C group has put into this, and the excellent spec, I can't help but think that they are solving a problem that doesn't exist.

Attackers are much more likely to assume a user's identity and thus acquire the user's certificates for such a system. That's how most compromises today happen.

The central issue is verifying a claimed identity - not just at enrollment time but also on every use. The spec waves that off to 'Authenticators' (see below). But that's not a solution. If user authorization to certificates is based on passwords, for example, then what prevents an attacker from taking compromised data about me and using it to get access to those passwords?

Don't get me wrong - this is good work. But I just think everyone seems to be focussing on the wrong end of the stick.

=======================

User Verification

The technical process by which an authenticator locally authorizes the invocation of the authenticatorMakeCredential and authenticatorGetAssertion operations. User verification MAY be instigated through various authorization gesture modalities; for example, through a touch plus pin code, password entry, or biometric recognition (e.g., presenting a fingerprint) [ISOBiometricVocabulary]. The intent is to be able to distinguish individual users. Note that invocation of the authenticatorMakeCredential and authenticatorGetAssertion operations implies use of key material managed by the authenticator. Note that for security, user verification and use of credential private keys must occur within a single logical security boundary defining the authenticator.


It _does_ solve a real problem. Several actually: password reuse, weak passwords, and phishing. If this catches on, it'll pretty much eliminate all leading causes of account compromise on the modern web. That's huge!

However, as you said, it seems like the other piece of the puzzle (the authenticators) isn't fully baked yet. U2F is one existing option, but that's not really suitable as a password replacement all on its own (it's just "what you have"). Hopefully Google, Mozilla, and Microsoft (all of whom contributed to development of this standard) have some ideas for standard cross-browser, cross-platform authenticators that provide the other authentication factors.


It doesn't actually. If the authenticator expects passwords, then this simply moves the problems you cite to another part of the system. These are the sources of compromises on the modern web. It's a neat system - but it doesn't solve the key issue.


Doesn't really matter if the authenticator expects a password, because that password _isn't_ what's used to authenticate to the site itself; it's always a public/private key pair.

If the user decides to use 12345 as his password for a Web Authentication authenticator that's obviously less than ideal, but still impossible to phish (the browser validates the domain the user is on), infeasible to brute force without access to the user's device (where the actual key pair is stored), and impossible to compromise in a data breach (the site only has a public key).


It's the weakness of single-sign on - if the authenticator is compromised, then the attacker now has the key that controls access to all the keys.


Sort of. Except the attacker has to compromise each user individually; there's no centralized server to target as is the case with existing single sign on services.


This already happens - it's called spear phishing. Instead of wasting time with multiple users, the attacker picks a high-value target. For example, if you can get one admin, it's game over.


In this thread, you keep moving the problem statement to cover a smaller and smaller subset of the general problem of account compromise. This is exactly what the attackers you're describing will do.

That's the point of this solution. If we can remove the vast majority of account-compromise tactics and force attackers to achieve success only if they "get one admin," then that's an incredible victory -- especially for the large number of ordinary people who are not admins.


True. But spear phishing is also harder with this standard, since you can't just phish credentials; you need to actually compromise the user's authentication device. (Get root on the user's phone, physically steal their authentication token, etc.)


It's true that U2F/WebAuth doesn't fix e.g. a CSRF vuln or an XSS vuln where someone abuses legitimate credentials like a session cookie, but this does effectively beat phishing, and that's no small feat.


That's a misconception - I don't see where this helps with phishing.


Because the Web Authentication API is built in to the web browser and uses unique credentials for each origin, it's not possible for a user to authorize a login session for a domain that's different from the one they're _actually_ browsing at the time.

If I visit g00gle.com and sign-in using the Web Authentication API, my browser is going to use my credentials for g00gle.com, not for google.com; unlike me, it _can't_ be fooled by similar-looking characters.
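For the curious, the server-side half of this check is simple: the browser itself embeds the origin it actually saw into the signed clientDataJSON, so the relying party just compares it against its own origin before accepting the assertion. Here's a rough Python sketch of that comparison (field names follow the WebAuthn clientDataJSON format; the surrounding signature and challenge-issuance machinery is omitted):

```python
import json

def verify_client_data(client_data_json: bytes, expected_origin: str,
                       expected_challenge: str) -> bool:
    """Check the type/origin/challenge fields of a WebAuthn clientDataJSON blob.

    The browser fills in `origin` itself, so a phishing page at
    https://g00gle.com can never produce origin == "https://google.com".
    A real implementation must also verify the authenticator's signature
    over this data; this sketch only shows the origin binding.
    """
    data = json.loads(client_data_json)
    return (data.get("type") == "webauthn.get"
            and data.get("origin") == expected_origin
            and data.get("challenge") == expected_challenge)
```

Reject the assertion if this returns False, and the lookalike-domain trick dies at the server even if the user was fooled.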


> If I visit g00gle.com and sign-in using the Web Authentication API, my browser is going to use my credentials for g00gle.com, not for google.com; unlike me, it _can't_ be fooled by similar-looking characters.

In the age of punycode domains this has become particularly important: many lookalike characters are indistinguishable from ASCII to the human eye in many fonts.


webauthn helps with phishing in the same way that U2F does: by binding the keys to specific domains/origins. A phishing site hosted at gmailsecurelogin.com can still steal your password, but the security key is not going to produce anything that's usable on mail.google.com as a second factor.


Not disagreeing. Note however that I'm talking about the authenticator. If that is compromised, then the attacker now has access to all keys.


I'm not exactly sure what your threat model is, then. The typical way this gets deployed is with U2F USB devices with non-extractable keys. "Access to all keys" is not exactly technically feasible.

If your concern is "well, there's always the admin/support backdoor", i.e. a compromised admin account or a social engineering attack on support personnel could lead to attacker-controlled keys being enrolled, I'm afraid that's not really something you can solve by just throwing new technology at the problem. However, you'll definitely make it harder to even get to the point where admin accounts are compromised by rolling out U2F or webauthn.

Personally, I'm perfectly fine with a technical solution that solves phishing for everything but the most advanced social engineering campaigns.


If an attacker compromises the credential, the credential is useless, yes? The credential is still customized _per origin_, so you'd have to decap the U2F key to extract the secret. This hardware is specifically designed to make that hard to do.


All keys, for one person

That is not great, but it's much much better than the 500 million people in the Have I Been Pwned database.


I'll add to what has already been said: this should also eliminate relay attacks completely, assuming you have already visited the site before. The site presumably has a copy of your public key, so they can send you messages only you can read with your private key, and since you have their public key as well, no MITM is possible.


This attitude slows down progress. Perfect is the enemy of good enough.

Instead of telling how this solution is not perfect (no solution is perfect) tell us how this solution is worse than our current situation. The fact is, it's not. It's actually better in a number of ways as other comments have pointed out.


> But that's not a solution. If user authorization to certificates are based off passwords, for example, then what prevents an attacker from taking compromised data about me and using that to get access to passwords?

Maybe I'm misunderstanding your point, but an attacker will also need physical access to your authenticator. That's a much less scalable attack vector.

As for a real problem, others pointed out phishing already. U2F and Webauthn aren't just a certificate management solution, but a protocol for the user agent to make trusted statements to the server.


Attackers don't bother stealing physical tokens - way too hard. It's much easier to ask for token re-issue using compromised info. See for example https://www.techlicious.com/blog/phone-porting-scam-can-empt...


Right, if a service provider is willing to register new public keys/certs (or any kind of credential) on your account as a result of identity fraud, that is indeed a different problem. But not nearly as big as phishing and password reuse.

So, I don't see how the presence of that problem suggests it's not worthwhile to work on better types of credentials.


If one link of the chain is broken, it's game over. While this is a good effort, my point is that not considering the entire system as a whole simply transfers problems from one part of the domain to another.


You are right in a sense. Right now the problem is up to individuals to create secure passwords for each and every website they interact with. People that have no idea how computer security works. The problem now is also on individual website developers to be trusted with those passwords and keep them secure.

This transfers the problem away from both of those groups. Individual no longer have to create multiple secure passwords and website developers no longer have to store those precious secrets. This is a huge improvement. You are correct that it's not perfect, but instead of security pros spending all their energy teaching people how to manage passwords, they can focus on the remaining problems that you point out.


> simply transfers problems from one part of the domain to another.

No, it completely eliminates several huge vulnerability domains. Just because it doesn't solve a different one doesn't make it bad.


What I would really like is a straightforward migration guide for those of us already using the FIDO U2F JS API, to help people migrate in a backwards-compatible way from the old, poorly-documented stuff to the new, currently poorly-documented but very soon better-supported stuff.

So far, https://www.imperialviolet.org/2018/03/27/webauthn.html is the closest; it’s good, but I’d love to have something with more straightforward instructions—“so you currently use U2F? Change this to this, that to that, do this to maintain compatibility and you’re roughly done”. We’ll figure it out for FastMail, but if the migration is straightforward and well-documented it’ll help all the web services out there to move sooner rather than later, which as a user I would very much like.
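For what it's worth, the spec does define an `appid` extension for exactly this migration: the site passes its old U2F AppID when requesting an assertion, so keys registered through the legacy U2F JS API keep working under WebAuthn. A hypothetical sketch of the request options a server might build (dict layout mirrors the spec's PublicKeyCredentialRequestOptions; the AppID URL is a placeholder, not FastMail's real one):

```python
import base64
import os

def build_assertion_options(credential_ids, legacy_app_id=None):
    """Build WebAuthn assertion options, optionally carrying the U2F `appid`
    extension so keys registered via the old U2F JS API still validate."""
    options = {
        # Fresh random challenge, base64url-encoded without padding
        "challenge": base64.urlsafe_b64encode(os.urandom(32)).decode().rstrip("="),
        "timeout": 60000,
        "allowCredentials": [
            {"type": "public-key", "id": cid} for cid in credential_ids
        ],
    }
    if legacy_app_id is not None:
        # e.g. "https://example.com/u2f-app-id.json" (placeholder)
        options["extensions"] = {"appid": legacy_app_id}
    return options
```

The browser-side call then hands these options to navigator.credentials.get(), and an old U2F registration scoped to the AppID verifies instead of failing on the new RP ID.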


Even better, hardware authentication token + second factor is password. (something you have, something you know)

See https://github.com/bitid/bitid/

It would also probably be easier/(or at least not harder) for people than username and password. Just tell users to press the button and enter a password to sign in.


yes. this spec (u2f) makes the hardware part more feasible to implement without needing to enter an OTP. thus enabling 2fa very widely.


It also thwarts phishing and MitM attacks (assuming the browser is not evil), which OTPs do not.


The article is a bit light on specifics, but I hope that this (or something like it) comes to pass. I'm not convinced of the security of the current batch of biometric authentication schemes, but I have some confidence that the organizations developing this standard know enough about what they're doing to avoid the obvious blunders.


What do they get from moving public key auth from tls to some kind of web api?

To me this looks like a duplication of efforts


Whatever happened to BrowserID from Mozilla? [1]

[1]: https://github.com/mozilla/id-specs/blob/prod/browserid/inde...


It was discontinued and died.


Unfortunately, until Apple incorporates this into Safari, it's not going to be usable. We'll have to wait and see what Apple announces at WWDC to see if this is useful. FaceID+Password would be an excellent companion to this.


Apple does seem to be working on it: https://bugs.webkit.org/show_bug.cgi?id=181943


To test, use the latest version of a supporting browser and a FIDO U2F key, and go to https://webauthn.io/


I'd honestly just rather get hacked on 99.9999% of sites.

I absolutely hate getting told to go get my phone or go to my email.

This also makes me a lot more paranoid about losing my phone.


You don't have to use your phone. In theory you may also use a Yubikey


I had a Yubikey. It died after a few months of simply lying on the shelf behind me. No idea why, but I haven't exactly been a fan since.


Did you get it replaced? Seems like it should have still been under warranty after only a few months.


Brought to you by the same guys who decided giving opaque, untamperable access to your browser by random external parties was a good idea.

I think I'll pass.


Is it the html <keygen> tag v2, done right?


From a very high level, yes, although this solves a wider range of problems.


How does the browser know what the relying party is? Does the website provide a URL for the browser to communicate with?


Would be nice if this could somehow integrate with GPG and scdaemon so that you can use OpenPGP cards to authenticate...


sounds like something i proposed a while back:

http://blog.codesolvent.com/2015/07/why-not-signed-password-...



My take on this earlier this year: https://www.nullauth.org/


Ten years from now, this will have changed nothing. It's a cumbersome, slowly developed standard.


Seems to overlap with U2F... can we extend that standard instead of rewriting it?


The Web Authentication Standard supersedes U2F: https://fidoalliance.org/fido2/


this reminds me of MS InfoCard/CardSpace https://en.wikipedia.org/wiki/Windows_CardSpace


Can't wait to use my OpenPGP card to authenticate with websites.


bit of a fluff piece for FIDO and Firefox. U2F already did what’s described. This just formalizes the spec into www.

https://www.w3.org/Submission/2015/SUBM-fido-web-api-2015112...


oops that was the 2015 version. the official standard is the same and just adds a “userHandle”. IOW this is not news.


Misleading. Most sites would be foolish to use a second factor (like a fingerprint) but not a password.


Indeed, especially when most computers and phones are literally covered in the target fingerprints.


The issue is not physical breaches by dusting for fingerprints.


This comment was made before they changed the title. Previously it claimed you could log in without passwords, which is just plain wrong.


It depends on what the server chooses to support, but the spec is designed so that it will be possible to support login without a password. Instead the authenticator (e.g., phone or USB dongle) would locally ask for a PIN and/or fingerprint before unlocking the private key.
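Concretely, a server opts into this by asking for user verification when the credential is created; the authenticator then insists on its local PIN or biometric before signing anything. A sketch of the relevant creation options expressed as a server-side dict (field names follow the spec; the site name and user details are made-up placeholders):

```python
def build_passwordless_creation_options(challenge_b64url: str) -> dict:
    """Creation options for a passwordless ('first factor') credential.

    `userVerification: "required"` tells the authenticator it must verify
    the user locally (PIN, fingerprint, etc.) before releasing signatures,
    which is what lets the server drop the password entirely.
    """
    return {
        "challenge": challenge_b64url,
        "rp": {"name": "Example Site"},                 # placeholder RP
        "user": {"id": "dXNlcjEyMw", "name": "alice",   # placeholder user
                 "displayName": "Alice"},
        "pubKeyCredParams": [{"type": "public-key", "alg": -7}],  # ES256
        "authenticatorSelection": {
            "requireResidentKey": True,        # discoverable credential
            "userVerification": "required",    # PIN/biometric gate
        },
    }
```

The browser passes these options to navigator.credentials.create(); at login time the server can then skip the password prompt and trust the authenticator's local check.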


tl;dr: Instead of having a password for each website, we'll use our phones' built-in security (pin, pattern, etc) to log into websites


I have been hearing about alternate sign-in methods for a good 10 years now, from biometrics to typing behaviour to human behavior and what not. Even 99% accuracy is not enough for a security tech to mature, especially when it comes to credential management.


Agree to disagree. These might not be production ready, but any novelty has to start somewhere before it becomes mature enough for wider adoption.



