In light of NIST's recent announcement, and several articles about password managers, why aren't sites being designed with client certs as an authentication or at least 2FA mechanism?
Could we bootstrap this by starting with support in password manager browser plugins, rather than better UI support in browsers?
Say Lastpass, KeePass, and 1Password agree to support an open public-key auth protocol: if a site supports the protocol, your password manager provides a public key instead of a password during signup, and then signs a challenge with the corresponding private key during login.
Advantages:
- Progressive enhancement -- not everyone has to switch at once. Switch if you already use a password manager and want to opt into better security. Start with power users and trickle down as the pattern establishes itself.
- Workflow -- my password manager is already necessary for me to log into most sites, so I'm already solving the problem of syncing the cert store everywhere I need it. My password manager is also already part of my UI flow whenever I'm asked for a new password. If anything this will simplify my life as a user, because server-side support will let my password manager offer better UI. (This would require some manual challenge response for the rare occasions I can't install the PM -- not sure how tricky that part would be.)
- Incentives -- supporting the protocol is a value add for password managers -- it's another way to get higher security by using the product.
I'm sure folks are ahead of me -- just tossing out this angle in case it's helpful.
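A minimal sketch of what that challenge flow could look like, assuming the managers settled on something like Ed25519 keys and Python's cryptography package; all names here are illustrative, not an existing protocol:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature
    import os

    # Signup: the password manager generates a per-site key pair and sends
    # only the public key, which the site stores instead of a password hash.
    site_key = Ed25519PrivateKey.generate()
    stored_public_key = site_key.public_key()

    # Login: the site issues a random challenge...
    challenge = os.urandom(32)

    # ...the password manager signs it with the synced private key...
    signature = site_key.sign(challenge)

    # ...and the site verifies the signature against the stored public key.
    try:
        stored_public_key.verify(signature, challenge)
        print("login ok")
    except InvalidSignature:
        print("login rejected")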
I think Mozilla's Persona addressed this general problem even better. Client auth support could easily have been a part of that had they gotten further along with the project.
Unfortunately, there doesn't seem to be a business model around making this better. I think that password manager companies would be shooting themselves in the foot by doing this too. The reason they exist is because this kind of solution doesn't currently exist.
I don't really understand the business model point - isn't it the same as the business model for any other browser improvement? Better browser -> more users -> ??? -> profit!
Similar ideas are the BrowserAuth stuff http://www.browserauth.net/ , which I think hasn't seen much activity, and FIDO https://fidoalliance.org/specifications/overview/ . FIDO is focused more on using some authentication system (either a biometric reader, or a Yubikey or similar token), but I think you can just use "I am logged into this account on this computer" as your client auth backend.
The arguments against client-side TLS certificates boil down to complex UI and maintainability, but it does have its uses.
At small to medium-sized companies it isn't uncommon to host a number of self-hosted services, such as GitLab, Mattermost, some wiki, perhaps OwnCloud; the list goes on. Securing all these services takes some non-trivial effort, even if you manage to get all services talking to your local LDAP server (we did!). Only recently, GitLab advised users of the self-hosted solution to upgrade ASAP due to a security issue.
To cut ourselves some slack, we placed all these services behind an Nginx proxy. That proxy is secured with client-side TLS certificates. So if you try to access https://chat.example.com without a certificate, you just get a friendly error message (actually, you get a picture of Grumpy Cat saying 'no', but you get the idea). With a certificate, you get the service you wanted to access. You still need to log on to the service, but that's usually just a matter of doing it once and ticking the 'remember me' checkbox or something similar. For our users it just works.
Generating new certificates and revoking old ones is fairly simple for the administrators (couple of scripts, ample documentation).
The arguments against public use still stand of course, but for this scenario it is a great solution.
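For scripted or API access through such a proxy, presenting the client certificate is a one-liner; a sketch assuming the chat.example.com proxy above and hypothetical client.crt/client.key files issued by the admins:

    import requests

    # Without the certificate the proxy serves the Grumpy Cat error page;
    # with it, the request is passed through to the backing service.
    resp = requests.get(
        "https://chat.example.com/",
        cert=("client.crt", "client.key"),  # hypothetical file names
    )
    print(resp.status_code)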
I've made the same set-up as well for private services hosted on public IPs which we couldn't put behind a VPN, but I have to admit this process is not without its own pains.
Nginx, for one, didn't have client-certificate support for proxying until fairly recently, and many HTTP proxies and tools still don't. Even when there is support, you might run into some interesting corner cases, since this is niche functionality, e.g. if you combine an Nginx proxy with an Azure-hosted web app. Both work great in isolation, but not together, due to the strange tricks Azure is doing with TLS renegotiation.
> I've made the same set-up as well for private services hosted on public IPs which we couldn't put behind a VPN, but I have to admit this process is not without its own pains.
I'm in the process of doing the exact same thing, and decided on client certs / mutual auth.
I do the same thing but with SSH. I like using SSH as the proxy because it supports other protocols besides HTTP and people generally already have SSH keys, and I don't have to manage another daemon just to provide authentication with yet another set of public/private keys.
Just have your service listening on 127.0.0.1 and have SSH listening on 0.0.0.0 and proxy in!
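A sketch of that tunnel from the client side, just wrapping the stock ssh client (host and port are placeholders):

    import subprocess

    # Forward local port 8080 to the service that only listens on 127.0.0.1
    # on the gateway; authentication is the user's existing SSH key.
    subprocess.run([
        "ssh", "-N",
        "-L", "8080:127.0.0.1:8080",
        "user@gateway.example.com",  # hypothetical gateway host
    ])
    # The proxied service is then reachable at http://localhost:8080/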
Client certs would be awesome but they're not for humans.
I was enrolled at the distance university of Hagen in Germany for a while and they require the use of client certificates for access to their online portal. There were clear instructions for how to create and use a client certificate but I suspect they have an advantage in that many of their students enrol in technical subjects or already have job experience. Compared to an ordinary website they also have the advantage that students HAVE to use the website and they're the only public distance university in Germany so there's no competition.
From a user perspective the client certificate is incredibly cumbersome. It's a file on your computer, so you have to remember where you put it and move it to new devices if you want to use it there too. It also means you're more likely to misplace or lose it though you're probably less likely to leak it compared to a password.
The instructions also largely boiled down to "use Firefox". In Germany Firefox has a huge market share and is widely deployed as the alternative browser in the public sector (although IE still exists due to contracts with intranet service providers). In other countries things look different.
In Chrome the experience of using client certificates was even more convoluted and the university officially didn't support Chrome because apparently client certificates flat out didn't work in Chrome until fairly recently (i.e. a few years ago).
In terms of UX, creating and using a password is trivial compared to creating and using client certificates. Of course this is mostly because most people do passwords wrong. Creating and using a secure non-guessable password is difficult (though services like 1password or lastpass have made it easier at the cost of adding a single point of failure) but it's still marginally easier than creating and using a client certificate.
The big difference, though, is that insecure-by-default is not as big of a cost to a website or software as the bad UX of client certificates. Sadly the UX of client certificates likely won't get better in browsers unless more sites use client certificates -- so it's stuck in a Catch-22.
Doesn't surprise me. I always wondered how much support overhead it caused. I worked at the computer pool help desk of another university for a few semesters, and it really opens your eyes to how alien concepts that techies find obvious can be to normal humans.
A protocol to have the site sign a certificate provided by the browser on signup/authentication would be nice. A UI to allow signing of pending certificate signing requests would allow you to authorize additional devices on demand, or revoke certificates/access for other devices.
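A rough sketch of what that signing step could look like server-side with Python's cryptography package: the browser (or a plugin) generates the key pair and a CSR, and the site's private CA signs it for a short validity window after authenticating the request. ca_cert and ca_key are the site's CA material; everything else is illustrative:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Client side: generate a key pair and a CSR naming the account.
    device_key = ec.generate_private_key(ec.SECP256R1())
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice@example.com")]))
        .sign(device_key, hashes.SHA256())
    )

    # Server side, after authenticating the signup / new-device request:
    # the site's private CA signs the CSR with a short validity window.
    def sign_device_csr(csr, ca_cert, ca_key, days=30):
        now = datetime.datetime.utcnow()
        return (
            x509.CertificateBuilder()
            .subject_name(csr.subject)
            .issuer_name(ca_cert.subject)
            .public_key(csr.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=days))
            .sign(ca_key, hashes.SHA256())
        )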
Some services allow you to do this today with device-specific passwords.
It (the <keygen> element) is not really deprecated, but it's so old and cumbersome and unsupported that the MDN page says "it is better to continue to consider this feature as deprecated and going away."
Chrome promises to drop it entirely in version 54, and Firefox is okay with removing it too. You're already warned in your developer console if you use the element.
I would love to see a revitalized version of this element, because client side authentication by way of certificates is really cool. It's interesting in comparison to the typical username/password auth ubiquitous today.
Because the UI is terrible, both within the browser and the tools to generate and manage certs themselves. I implemented client certs for an internal API service and I quickly learned that even among a company of generally very clueful people, no one really knows how PKI works. No matter how much I tried to document the steps to generating and installing keys, users would have problems, and I would usually end up just doing it for them. The command line tools for PKI are generally very unfriendly, and terms like "private key", "signed certificate", "keystore" and "truststore" are basically meaningless to 99.9% of people, including other developers.
Not to mention that almost no one uses two-way SSL compared to standard SSL, making it very difficult to find good documentation and support for full two-way authentication. Most people assume SSL means server-only authentication and don't even realize client-authentication is possible. Many tools simply don't support it, or require obscure options to enable it. I found it difficult even to get a properly signed client certificate from a major CA, as the standard certs you get are marked for server authentication only.
Client certs have dire usability in browsers - it's impossible to do things like logging out, or using a public machine reasonably, managing certs across all your devices is extremely difficult, etc etc.
BrowserID (Persona) solved some of these issues by issuing short-term certs to devices based on a login, and designing an API for logout, but even the organisation that specced it out (Mozilla) never integrated it into its browser, so it failed on usability grounds.
> it's impossible to do things like logging out, or using a public machine reasonably
Both of those are really the same issue, and they boil down to 'only use a browser instance you own to use a secure site, and don't share ownership of browser instances.' That seems pretty reasonable to me: indeed, anyone who uses a shared browser for private communications has already lost, badly.
The upside of not logging out is never having to log in.
You're correct about the pain of managing certs across devices.
Threat model! You've only lost to other people who have previously had access to that computer. Sometimes those people aren't within your threat model.
For instance, "a close friend impersonates me on HN" is pretty strongly outside my threat model. And as mentioned for people whose only computers are shared computers, "someone installs malware on the computer at my shelter and spies on my tax return" is a preferable outcome to "I don't file a tax return" (but we still have authentication, because both of those are still much better than "my abusive ex, who is not allowed in the shelter, spies on my tax return").
My employer is deploying client certs to company-managed devices (transparently via config management) and requiring them to hit company internal tools precisely to thwart this behavior.
If your certificates aren't stored in the clear (either passphrase secured or on a physical token), then you can solve the public machine problem and logging out. The next user has to know your passphrase or use it within a narrow time window (if the cert is on the system and not on a physical token).
Logging in with a different account or role is handled by the browser asking which cert you want to use.
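A sketch of what that looks like on the client side with Python's standard library: the private key file stays encrypted on disk and the passphrase is only needed at connection time (file and host names are placeholders):

    import getpass
    import http.client
    import ssl

    # The key file is passphrase-encrypted, so copying the files off a
    # shared machine is not enough to impersonate the user.
    ctx = ssl.create_default_context()
    ctx.load_cert_chain(
        certfile="client.crt",
        keyfile="client.key",
        password=getpass.getpass("cert passphrase: "),
    )

    conn = http.client.HTTPSConnection("intranet.example.com", context=ctx)
    conn.request("GET", "/")
    print(conn.getresponse().status)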
> If your certificates aren't stored in the clear (either passphrase secured or on a physical token), then you can solve the public machine problem and logging out.
Sure, but then you have to distribute physical tokens to people, and most people will not buy their own tokens. And once you've built a protocol to get a passphrase-secured cert from an arbitrary provider, you have the same problems the likes of BrowserID had with convincing browsers to adopt it. You also have to deal with building a protocol for updating certs as they expire, and other hard problems with key management.
I know this is a US-based site, but as of July 1st 2016 the eIDAS Regulation has come into force in all EU member states, creating a legal structure for electronic identification, signatures, seals, and documents throughout the EU. Adobe for instance supports EU Trusted Lists: https://blogs.adobe.com/documentcloud/eu-trusted-list-now-av...
In many EU countries, getting citizen certificates is becoming more common in order to deal with government paperwork (taxes, forms, healthcare, subsidies, etc.), so now that a unified trust structure exists, maybe it can boost adoption also by browsers and websites.
I prefer client certificates to passwords when logging into intranet sites, which is just a couple of clicks as opposed to typing (password managers with auto-fill could also be used), but having a system to generate and provide the certificate is not simple. You would somehow have to authenticate and identify the user before you issue a certificate for that person. That "somehow" is usually using a username and password already provided to them (most likely for the system login, which is separate from other applications wanting to use client certificates). Installing directly from a issuing server to the browser is ideal, IMO. Other channels for providing the certificate come with more issues on the security and usability fronts.
There's also a big difference in where the certificate store is and which browsers share it. For example, on Windows the certificate store is managed using Internet Explorer and the same is also used by Google Chrome. Firefox, on the other hand, has its own certificate store (including trusted CAs). So even if you deploy a system to provision client certificates, non-tech users may find that the site does not work on a certain browser depending on which browser they did the initial certificate generation and import from.
Exporting and importing certificates into different browsers is quite easy for techies, but you'd have to provide step-by-step instructions with screenshots for others. And God forbid a browser's or system's certificate management interface changes -- you'd have tons of tickets coming in to support.
Does the Apple keychain handle certificates? Because I thought we'd all moved on from remembering two or three passwords to letting software manage and synchronize those. It's been working flawlessly for a couple years now. Although at that point, a long, unique password seems to be about the same as a certificate.
Anywhere also means from any device, not just ones you own and have set up with your iCloud account. Mind you, I'm not saying that it's a good idea, logging into anything from anywhere willy nilly.
Everyone could have their own certificate authority on their person in a small usb device. You could even generate one time login creds if you were on a public machine.
That only works if the machine has accessible USB ports and is positioned in a way that frequent use of the USB ports is convenient. I'm thinking of 2 places I visit frequently where the computers are hidden under a big ol' desk.
No, with 2FA, I need my password, and my phone. I only have to set up one device: my phone (or whatever other 2FA token). I don't have to go through my home computer, my laptop, my phone, my work computer, my friend's computer, etc. and set up my certificate everywhere just because I might need to log in to a website.
Since I didn't see this elsewhere - client certs have privacy concerns, because by the time you've done your handshake the other side knows exactly who you are. This pretty much rules out always-presented certs. Because of that, you would need to manage the certs more directly, and then you get into all of the UX issues around managing certs.
I would like to extend the question: is any startup/company working on something that would simplify the client certs process?
Why haven't we really moved beyond password-based logins yet?
I am surprised nobody has mentioned the FIDO Alliance and its Universal 2nd Factor, or its successor work by the W3C. They're actively working on 'killing passwords'.
U2F originated from Google when they wanted better 2FA for their internal services, and they partnered with Yubico to create the hardware. In a two-year study it was shown to be faster to use, less prone to user error, and more secure [1]. It's basically a client cert on a USB stick, but the standards allow for other forms of hardware as well.
U2F is a FIDO 1.0 standard, the 2.0 version is now being worked on by the W3C Web Authentication Working Group[2]. Microsoft has launched support for a draft of this spec in Windows 10 and Edge under the 'Windows Hello' banner[3].
Looking into the future, client certificates are doomed (think: the keygen tag). Especially if you are using hardware tokens, and if you want to use them in TLS client authentication.
We have been doing this for years in .ee, and the user experience has been mediocre. And this is something we cannot fix (browser vendors can). The future promises to be FIDO, but then again, this is a different model that mostly addresses authentication ONLY.
I just started a weekend project to bring hardware-token-based authentication to a state where it could be called "standardized", for the huge EU market, where very many citizens in different countries have a vetted PKI identity on the eID smart card.
Might be of interest. https://github.com/martinpaljak/x509-webauth/wiki/WebAuth The core of it is just a profile of OpenID Connect ID Token, with fresh browser extensions with native messaging support to facilitate actual communication to hardware tokens.
Why would we move to client certs? You need to have a file and sync it between devices. You might as well use a password manager and sync it between devices.
I would argue the opposite: the only problem that has to be fixed is a method for distributing certificates. I use client certs for all my self-hosted services, and a certificate is much more usable than typing passwords on any mobile device.
The simplest form (the equivalent of basic auth, but secure) is a mere "ssl_client_certificate file.pem; ssl_verify_client on;" or equivalent -- one just has to say "ask for a certificate" and "here are my trusted issuers".
I guess the only thing that's probably lacking is pluggable modules for popular web frameworks (issuing certs and matching them to users in the DB).
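A minimal sketch of that glue for a Flask app sitting behind Nginx, assuming the proxy has already verified the certificate (ssl_verify_client on) and forwards the subject DN in a header such as "proxy_set_header X-SSL-Client-DN $ssl_client_s_dn;"; the header name and user table are illustrative:

    from flask import Flask, abort, g, request

    app = Flask(__name__)

    # Toy user store keyed by certificate subject DN.
    USERS_BY_DN = {"CN=alice,O=Example Corp": "alice"}

    @app.before_request
    def load_user_from_client_cert():
        # Trust this header only because Nginx sets it after verifying the cert.
        dn = request.headers.get("X-SSL-Client-DN")
        user = USERS_BY_DN.get(dn)
        if user is None:
            abort(403)
        g.user = user

    @app.route("/")
    def index():
        return "hello " + g.user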
What problems do you see with yubikey? I just finished deploying an app with FIDO 2FA using yubikey and it went pretty well. I had to restrict users to Chrome, though; in my scenario this wasn't a problem. My biggest problem was adapting my (rspec/capybara) integration test suite. Do you know of issues that I may not yet have seen?
Well, I mean for broad adoption in particular. For example, you hit on one of them: users want to use all different browsers, and restricting people to Chrome isn't viable for plenty of reasons. I think the biggest problem is that I have an iPhone sitting next to me and no way to use a yubikey with it. Many (most?) Android phones are the same way.
Privacy is the biggest problem - both sides of the connection present their identity simultaneously, so you leak your identity to a MITM. For server-to-server communication, that's fine. For person-to-website communication, the two sides are semantically asymmetric, and I don't want to prove to 104.20.44.44 that I am geofft until 104.20.44.44 proves to me that it's news.ycombinator.com.
UX is the other one. Chrome is removing support for <keygen>, and they have excellent arguments for why: https://groups.google.com/a/chromium.org/d/msg/blink-dev/z_q... (Essentially, the ability for a website to inject certs into the system cert store is super weird.)
And without <keygen>, the experience of installing certs is completely awful. Let alone the UX problems with expired certs, etc.
I had to abandon them because of HTTP/2. If one site on the web server uses TLS client auth, and then you go to another site on the same server you receive HTTP 421 Misdirected Request, because of connection reuse.
And almost no browser can deal with them correctly (or couldn't until a few months ago) -- I'm looking at you, Chrome, mobile Opera, etc...
2FA would be nice, but for client certs, as others have said, the UI sucks. Once you have to share the same cert (or manage multiple certs on multiple computers) it becomes too cumbersome for most end users.
Apple Keychain can store certs (I believe), as can most password managers so there's that to help.
But, IMHO, the only way it could get widespread use is if the cert is stored on a physical token that you can connect to your different computers, in the style of the DOD CAC where the private key never leaves the card itself. Back up the certificate before storing it on the card or USB stick, and then plug that into every computer you want to access. Downside: Without multiple tokens you can't use multiple computers at the same time (easily).
I was just about to say the same thing. The only cases where I have seen successful rollouts of client certificates have involved something like a CAC.
It is still clumsy and painful, and I doubt many users would volunteer for a similar approach.
It takes a lot of organization to make them work in DOD, but in my experience it is worth it. From a dev perspective they are actually easier than username/password, as the client certificate DN is available on the server, so you can know the identity of the user very easily.
It makes sense within the DoD environment since it is a controlled environment with card readers at every location and pre-installed software on stations. Not scalable to the general public.
I think most of it is the fact that users know how passwords are supposed to work -- the basic "dos and don'ts" (e.g., do not give them out to "tech representatives" calling your home, etc.), how to change them, etc.
Certificates? Most are vague at best about them. Does closing the browser window stop access? Can you share certs? If your laptop is stolen did certs get compromised? How do you deal with compromised certs? Etc, etc. Ask a generic user something like this and enjoy the answers.
This is slowly changing -- as more organizations switch to cert-based authentication more users get to know and trust them which can lead to wide adoption for personal use.
You'd need to be able to maintain/deploy certs to all your devices in a way that's simple enough for non-technical users to understand; never mind the added requirement for safe private cert handling on each device.
Once you're outside of the browser accessing services -- take banks for example -- now I need my browser to have the cert and my mobile apps individually to have those certs as well... or I have certs for the browser and passwords for the apps (more complexity for the user). Sure, my devices can have a cert safe or similar, but the apps/browsers would have to respect that sufficiently for it to be useful (hard enough to get my password manager to work with my phone apps well... certs... eek!)
Finally, each browser, app, etc. may have its own way of dealing with things... making for even more complexity.
I could go on.
Point is there's an awful lot of friction to make that work as simply as the less secure, but apparently socially acceptable, passwords we use today. Whether that should be "the way" or not is irrelevant... consumer choices include factoring in immediate ease of use, right or wrong.
I think there are better arguments for 2FA, since there is something approaching reasonable standards (most applications I encounter support Google Authenticator or that standard at least). You still end up with another ease-of-use issue, but that might be a more surmountable one. (I do hate, though, that I have to use my 2FA on the device which I get my 2FA auth codes from... I understand why, but still...)
Of course, Hacker News is a start-upish sort of community... so maybe unified technology security management for consumers is the next big thing to be "disrupted". :-) Have at it!
Doesn't Google handle its own internal IT services in this way? You don't log into a domain; every internal IT service is available on the web and not behind a firewall, but requires a company-provided cert to access? This always seemed better to me than trying to secure a perimeter around resources anyway, usability considerations notwithstanding.
https://web.whatsapp.com/ only allows login through a cert (via a QR code). Some banks also use this. A smartphone is used as a cert vault. Not a client cert in the traditional sense, though it's basically the same thing.
Nah it's not the same thing. Client Authenticated TLS provides a mutually authenticated channel. Mutually authenticated channels cannot be man-in-the-middled. The auth is happening at the transport layer.
A login through a QR code (basically a token) is just normal TLS with the same MITM risk. It's just an application-layer login.
I don't understand the security argument you're making. Are you claiming that, if I use client certs, I am protected against a rogue CA issuing a fake certificate for web.whatsapp.com? How?
If you're thinking of a protocol like Kerberos, then yes, you can derive a shared secret because there's a single-point-of-trust authentication entity (the KDC) which has knowledge of both your password and the server's password/key, and yes, your password certifies that you're talking to the right server (as long as the KDC is trustworthy). But that's not how TLS mutual auth works.
I've just set that up, thanks - the UX is brilliant, exactly what's needed to increase adoption. Of course, it requires that you've gone through the WhatsApp phone app setup, but I'm sure this model could be applied on an equivalent system - especially as smart phones are almost ubiquitous now.
How is it the same thing? If it's the system I'm familiar with (the QR code is basically an OTP for your phone), then they're nowhere near "basically" the same, or even the same at all.
Some companies use Forcepoint's TLS inspection to monitor all outbound TLS/SSL traffic. I suspect client certs break that. Probably that's a good thing, but if your customers work inside a company using Forcepoint, well, those won't be your customers from 9AM - 5PM their time.
"When you enable SSL decryption for your endusers, SSL-encrypted traffic is decrypted, inspected, and then re-encrypted before it is sent to its destination."
In my work deploying desktop software to institutions, often academic and medical, I had to disable client-side certs to allow for connections to be made.
Apparently, even though it should be technically more secure, client-side certs are so infrequently used, these types of gateway monitors block connections made with them!
Maybe not as surprising, but non-browser software making connections to servers with non-publicly-signed SSL certs (but embedded CA chains) were also blocked.
This makes me sad. Academic and medical sites should really be the last places using TLS breaking gateways. Mainly because staff frequently have to sign legal confidentiality and data access agreements granting them personal access to other institutions patient data rather than having an institutional agreement.
Posture checking and zero internal network trust really needs to take hold in these places. If people must tap TLS, they should do it via installation of software client side, not MITM.
If the client certs are provisioned correctly (and validated correctly on the server side), then they fundamentally should break corporate TLS/SSL gateways. For example, I would store the username in the client cert's EMAIL field when provisioning it, and then check on the server that the same user is authenticating, binding the client cert to a single user. The TLS/SSL gateway is then going to need to have each user's client cert on hand (with private key) if it wants to intercept the traffic.
The only way around I think would be if the TLS/SSL gateway (such as Forcepoint) gave the user a way to upload their client cert with its private key directly into the TLS/SSL gateway... hmm, I wonder if they already allow this.
p.s. I call these gateways "lan-in-the-middle" attacks.
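For a server that terminates TLS itself, the binding check described above is roughly this, using the standard library's view of the peer certificate (the session lookup is illustrative):

    import ssl

    def peer_matches_session_user(ssl_sock: ssl.SSLSocket, session_email: str) -> bool:
        """Reject the request unless the verified client cert names the logged-in user."""
        peer = ssl_sock.getpeercert()  # only populated when ssl.CERT_REQUIRED is set
        if not peer:
            return False
        # The subject comes back as a tuple of RDNs; flatten it into a dict.
        subject = {key: value for rdn in peer.get("subject", ()) for key, value in rdn}
        return subject.get("emailAddress") == session_email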
I've designed this protocol[1] to make client certs easy for end users.
Signing up at a site is just requesting a client cert at the site's private CA.
It requires a user agent (a browser plugin) on the client side. The agent keeps track of which certificates belong to which sites, so it actively blocks MitM attacks.
Granted, if you need to share your certificates, you'd have to copy them over. For that, use the sync feature of your browser or design something better. But syncing is a separate concern, independent of the authentication protocol.
We've been using client certificates at MIT for more than a decade. Many people like them. But, most people hate them. As others said, the usability is the primary issue.
The specific issues are:
1. You have to install a client certificate on every device you want to use. And, you have to keep that certificate up to date. If you use multiple web browsers (for, say, UI development testing) you have to install and maintain the certificate in each one. MIT currently issues client certificates with a validity period of slightly less than one year. That makes for a lot of lost time every year for students, staff, and faculty, spent re-installing client certs.
2. The certificate can be stolen just like a password. But, there is no easy way for the client to revoke a stolen certificate. Many CRL implementations are lacking or entirely absent. Therefore, organizations that depend on client certificate authentication typically depend on certificate expiration to re-secure compromised client accounts. (See #1 above. A rough revocation-check sketch follows after this list.)
3. Client certificates are not supported by all web servers. The major players support it pretty well. But, there's been a proliferation of specialized, micro, and nano web servers over the last several years.
4. You have to invest in securing the signing key for the client certificates. This usually means a decent HSM, which costs on the order of $x00,000 USD. At MIT for example, there is a web site where anyone can go at any time to generate a new client certificate (again, see #1). This site needs to be able to perform signatures constantly, which means the signing key needs to be accessible online 24/7 somehow.
5. Proxies are a problem. If you try to terminate the TLS connection early, the client certificate related operations are not "proxied back". Some proxies like HAProxy will allow you to pass back environment variables set during the client certificate authentication process. But, that is obviously not the same as having the final destination webserver perform validation. This has become much more of an issue with the advent of CloudFlare's TLS-proxying CDN.
6. If you implement logic to expire certificates at the end of a customer's subscription or enrolment period, it can cause significant headaches with processes where it would be helpful to still be able to authenticate them. For example, if a customer's subscription to your SaaS site expires and you want them to be able to renew without inadvertently sharing details of their account with others. Or, if a student has graduated but they still need to pay some unpaid parking tickets. MIT runs into this issue often due to its use of client-side certificates. If you extend their certificate well beyond the end of their system authorization, you have to put a lot of complex authorization code in all of your local apps and websites. While client certificates only provide authentication and not authorization, many implementers use client certificates for both simultaneously. This is especially true when protecting web content and web sites with client certificates.
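On issue 2, a server-side revocation check against a published CRL is roughly this with Python's cryptography package (verifying the CRL's own signature and freshness is deliberately elided; file names are placeholders):

    from cryptography import x509

    def is_revoked(cert_pem: bytes, crl_pem: bytes) -> bool:
        """Check a client certificate's serial number against the CA's CRL."""
        cert = x509.load_pem_x509_certificate(cert_pem)
        crl = x509.load_pem_x509_crl(crl_pem)
        # NOTE: a real check must also verify the CRL's signature and next_update.
        return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None

    with open("client.pem", "rb") as c, open("ca-crl.pem", "rb") as f:
        print(is_revoked(c.read(), f.read()))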
Really interested in the adoption of NFC smart cards. Still, it's just not possible now -- you'd need hardware and software support on basically every platform. Looking forward to it, though.
The Revenue website here in Ireland does this: you need to have a cert which is read by a Java applet. The cert is issued after you use a one-time password mailed to your home, from what I remember; they expire every so often and need to be renewed.
One of the banking sites did as well but dropped it, so now my Gmail is more secure than my bank account since there is no 2FA at this bank.
The Java applet approach must cause endless customer support requests.
I've done this once, as a client. I believe it was with startssl.com. Somehow I messed it up, I don't know, but in the end it was quite troublesome to get it working. And I'm a pro user, so I consider myself someone who should be able to get this working.
Another problem is how to manage your keys between devices.
If this would be offered as an extra option, just like Gmail has 2FA, that would be great!
Client certificates are harder to put in place than a login form. You have to create the certificates and install them on every device that will log in to the website. And you will have to manage changed/lost devices.
But I agree it is a good solution as a 2FA mechanism for internal applications.
A password is safely stored in my brain and 100% portable. I can use it from whatever location. If I lose my computer, the password is not in the hands of bad guys. Not so with a client cert.
> If I lose my computer, the password is not in the hands of bad guys.
You really just want a password-protected client cert then (the password can be specific to the cert, or FDE). Your in-brain password keeps someone who steals your computer from getting access, and the client cert makes sure that an actor who can sniff the whole network and break TLS still can't impersonate you.