Say Lastpass, KeePass, and 1Password agree to support an open public-key auth protocol, where during signup if a site supports the protocol, your password manager will provide a public key instead of a password, and will then sign a challenge with that key during login.
- Progressive enhancement -- not everyone has to switch at once. Switch if you already use a password manager and want to opt into better security. Start with power users and trickle down as the pattern establishes itself.
- Workflow -- my password manager is already necessary for me to log into most sites, so I'm already solving the problem of syncing the cert store everywhere I need it. My password manager is also already part of my UI flow whenever I'm asked for a new password. If anything this will simplify my life as a user, because server-side support will let my password manager offer better UI. (This would require some manual challenge response for the rare occasions I can't install the PM -- not sure how tricky that part would be.)
- Incentives -- supporting the protocol is a value add for password managers -- it's another way to get higher security by using the product.
I'm sure folks are ahead of me -- just tossing out this angle in case it's helpful.
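A minimal sketch of the signup/login flow described above, in Python (all names here are illustrative, and I'm using the `cryptography` package's Ed25519 API as a stand-in for whatever the protocol would actually specify; a real design would also bind the challenge to the site origin and a session to block phishing and replay):

```python
# Sketch of the password-manager public-key flow: the manager hands the
# site a public key at signup, and signs a fresh challenge at login.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)


class PasswordManager:
    """Client side: holds one keypair per site instead of a password."""

    def __init__(self):
        self._keys = {}  # site -> private key

    def signup(self, site):
        key = Ed25519PrivateKey.generate()
        self._keys[site] = key
        # Only the public key ever leaves the manager.
        return key.public_key()

    def answer_challenge(self, site, challenge):
        return self._keys[site].sign(challenge)


class Site:
    """Server side: stores a public key per user, never a secret."""

    def __init__(self):
        self._users = {}  # user -> public key

    def register(self, user, public_key):
        self._users[user] = public_key

    def make_challenge(self):
        return os.urandom(32)

    def verify(self, user, challenge, signature):
        try:
            self._users[user].verify(signature, challenge)
            return True
        except InvalidSignature:
            return False


pm = PasswordManager()
site = Site()
site.register("alice", pm.signup("example.com"))

challenge = site.make_challenge()
sig = pm.answer_challenge("example.com", challenge)
assert site.verify("alice", challenge, sig)
assert not site.verify("alice", site.make_challenge(), sig)  # stale sig fails
```

The key property is the one described above: the site stores only a public key, so a database breach leaks nothing reusable, and each login proves possession of the private key without ever transmitting it.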
Unfortunately, there doesn't seem to be a business model around making this better. I think that password manager companies would be shooting themselves in the foot by doing this too. The reason they exist is because this kind of solution doesn't currently exist.
At small-to-medium-sized companies it isn't uncommon to run a number of self-hosted services, such as GitLab, Mattermost, some wiki, perhaps OwnCloud -- the list goes on. Securing all these services takes non-trivial effort, even if you manage to get them all talking to your local LDAP server (we did!). Only recently GitLab advised users of its self-hosted edition to upgrade ASAP due to a security issue.
To cut ourselves some slack, we placed all these services behind an Nginx proxy. That proxy is secured with client-side TLS certificates. So if you try to access https://chat.example.com without one, you just get a friendly error message (actually, you get a picture of Grumpy Cat saying 'no', but you get the idea). With the certificate, you get the service you wanted to access. You still need to log in to the service itself, but that's usually just a matter of doing it once and ticking the 'remember me' checkbox or something similar. For our users it just works.
Generating new certificates and revoking old ones is fairly simple for the administrators (couple of scripts, ample documentation).
The arguments against public use still stand of course, but for this scenario it is a great solution.
Nginx, for one, didn't have client-certificate support for proxying until fairly recently, and many HTTP proxies and tools still don't. Even when there is support, you might run into some interesting corner cases, since this is niche functionality -- e.g. if you combine Nginx with an Azure-hosted web app. Both work great in isolation, but not together, due to the strange tricks Azure does with TLS renegotiation.
In the process of doing the exact same thing, and decided on client certs/mutual-auth.
Just have your service listening on 127.0.0.1, have SSH listening on 0.0.0.0, and proxy in!
I was enrolled at the distance university of Hagen in Germany for a while and they require the use of client certificates for access to their online portal. There were clear instructions for how to create and use a client certificate but I suspect they have an advantage in that many of their students enrol in technical subjects or already have job experience. Compared to an ordinary website they also have the advantage that students HAVE to use the website and they're the only public distance university in Germany so there's no competition.
From a user perspective the client certificate is incredibly cumbersome. It's a file on your computer, so you have to remember where you put it and move it to new devices if you want to use it there too. It also means you're more likely to misplace or lose it though you're probably less likely to leak it compared to a password.
The instructions also largely boiled down to "use Firefox". In Germany Firefox has a huge market share and is widely deployed as the alternative browser in the public sector (although IE still exists due to contracts with intranet service providers). In other countries things look different.
In Chrome the experience of using client certificates was even more convoluted and the university officially didn't support Chrome because apparently client certificates flat out didn't work in Chrome until fairly recently (i.e. a few years ago).
In terms of UX, creating and using a password is trivial compared to creating and using a client certificate. Of course this is mostly because most people do passwords wrong. Creating and using a secure non-guessable password is difficult (though services like 1Password or LastPass have made it easier at the cost of adding a single point of failure), but it's still marginally easier than creating and using a client certificate.
The big difference though, is that insecure-by-default is not as big of a cost to a website or software as the bad UX of client certificates. Sadly the UX of client certificates likely won't get better in browsers unless more sites use client certificates -- so it's stuck in a Catch 22.
Some services allow you to do this today with device-specific passwords.
Chrome promises to drop it entirely in version 54, and Firefox is open to removing it too. You're already warned in your developer console if you use the <keygen> element.
I would love to see a revitalized version of this element, because client side authentication by way of certificates is really cool. It's interesting in comparison to the typical username/password auth ubiquitous today.
Not to mention that almost no one uses two-way SSL compared to standard SSL, making it very difficult to find good documentation and support for full two-way authentication. Most people assume SSL means server-only authentication and don't even realize client-authentication is possible. Many tools simply don't support it, or require obscure options to enable it. I found it difficult even to get a properly signed client certificate from a major CA, as the standard certs you get are marked for server authentication only.
BrowserID (Persona) solved some of these issues by issuing short-term certs to devices based on a login, and designing an API for logout, but even the organisation that specced it out (Mozilla) never integrated it into its browser, so it failed on usability grounds.
Both of those are really the same issue, and they boil down to 'only use a browser instance you own to use a secure site, and don't share ownership of browser instances.' That seems pretty reasonable to me: indeed, anyone who uses a shared browser for private communications has already lost, badly.
The upside of not logging out is never having to log in.
You're correct about the pain of managing certs across devices.
Indeed they have - many people rely on public libraries for internet access. I don't think we should disadvantage them any further.
For instance, "a close friend impersonates me on HN" is pretty strongly outside my threat model. And as mentioned for people whose only computers are shared computers, "someone installs malware on the computer at my shelter and spies on my tax return" is a preferable outcome to "I don't file a tax return" (but we still have authentication, because both of those are still much better than "my abusive ex, who is not allowed in the shelter, spies on my tax return").
You obviously haven't had the same kind of close friends that I've had.
Logging in with a different account or role is handled by the browser asking which cert you want to use.
Sure, but then you have to distribute physical tokens to people, and most people will not buy their own tokens. And once you've built a protocol to get a passphrase-secured cert from an arbitrary provider, you have the same problems the likes of BrowserID had with convincing browsers to adopt it. You also have to deal with building a protocol for updating certs as they expire, and other hard problems with key management.
In many EU countries citizen certificates are becoming more common for dealing with government paperwork (taxes, forms, healthcare, subsidies, etc.), so now that a unified trust structure exists, maybe it can boost adoption by browsers and websites too.
Edit: here's an official FAQ on eIDAS. It explicitly mentions website authentication and browsers. https://ec.europa.eu/digital-single-market/en/news/questions...
There's also a big difference in where the certificate store is and which browsers share it. For example, on Windows the certificate store is managed using Internet Explorer and the same is also used by Google Chrome. Firefox, on the other hand, has its own certificate store (including trusted CAs). So even if you deploy a system to provision client certificates, non-tech users may find that the site does not work on a certain browser depending on which browser they did the initial certificate generation and import from.
Exporting and importing certificates into different browsers is quite easy for techies, but you'd have to provide step-by-step instructions with screenshots for others. And God forbid a browser/system's certificate management interface changes, and you'd have tons of tickets coming to support.
The best and most durable solution is really some RFID tags embedded in your wrists and forehead.
U2F originated at Google, when they wanted better 2FA for their internal services and partnered with Yubico to create the hardware. In a two-year study it was shown to be faster and easier to use, less prone to user error, and more secure. It's basically a client cert on a USB stick, but the standards allow for other forms of hardware as well.
U2F is a FIDO 1.0 standard, the 2.0 version is now being worked on by the W3C Web Authentication Working Group. Microsoft has launched support for a draft of this spec in Windows 10 and Edge under the 'Windows Hello' banner.
We have been doing this for years in .ee, and the user experience has been mediocre. And this is something we cannot fix (browser vendors can). The future promises to be FIDO, but then again, that is a different model, one that addresses authentication ONLY.
I just started a weekend project to bring hardware-token-based authentication to a state where it could be called "standardized", for the huge EU market, where very many citizens in different countries have a vetted PKI identity on an eID smart card.
Might be of interest. https://github.com/martinpaljak/x509-webauth/wiki/WebAuth The core of it is just a profile of OpenID Connect ID Token, with fresh browser extensions with native messaging support to facilitate actual communication to hardware tokens.
Yes I could figure it out. But what about my grandmother? She can handle passwords, but definitely not certs.
Not that this couldn't be fixed (on the client side), but no one is doing this.
A simplest form (the equivalent of basic auth, but secure) is a mere "ssl_client_certificate file.pem; ssl_verify_client on;" or equivalent -- one just has to say "ask for a certificate" and "here are my trusted issuers".
I guess, the only thing that's probably lacking is pluggable modules for popular web frameworks (issuing certs and matching them to users in DB).
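For what it's worth, the "matching them to users" half of that glue is small once Nginx has done the verification. A sketch, assuming Nginx terminates TLS and forwards the verified subject DN in headers (the header names, DN formats, and users here are my own convention, not a standard -- the Nginx side would be something like `proxy_set_header X-SSL-Client-S-DN $ssl_client_s_dn;`):

```python
# Map a proxied client-cert identity to an application user.
# Assumes Nginx verified the cert (ssl_verify_client on) and forwarded
# the result in custom headers; names are illustrative.

USERS_BY_DN = {
    "CN=alice,O=Example Corp": "alice",
    "CN=bob,O=Example Corp": "bob",
}


def user_from_headers(headers):
    """Return the application user for a proxied request, or None."""
    # Nginx sets $ssl_client_verify to SUCCESS only when the presented
    # cert chains to a CA listed in ssl_client_certificate.
    if headers.get("X-SSL-Client-Verify") != "SUCCESS":
        return None
    return USERS_BY_DN.get(headers.get("X-SSL-Client-S-DN", ""))


assert user_from_headers({
    "X-SSL-Client-Verify": "SUCCESS",
    "X-SSL-Client-S-DN": "CN=alice,O=Example Corp",
}) == "alice"
assert user_from_headers({"X-SSL-Client-Verify": "NONE"}) is None
```

A framework plugin would wrap exactly this lookup as middleware, plus the cert-issuing side.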
Even a hardware token like a YubiKey or CAC has a bunch of problems, and they are by far the "best" way to do something like this.
UX is the other one. Chrome is removing support for <keygen>, and they have excellent arguments for why: https://groups.google.com/a/chromium.org/d/msg/blink-dev/z_q... (Essentially, the ability for a website to inject certs into the system cert store is super weird.)
And without <keygen>, the experience of installing certs is completely awful. Let alone the UX problems with expired certs, etc.
And almost no browser can deal with them correctly (or couldn't until a few months ago) -- I'm looking at you, Chrome, mobile Opera, etc...
Apple Keychain can store certs (I believe), as can most password managers so there's that to help.
But, IMHO, the only way it could get widespread use is if the cert is stored on a physical token that you can connect to your different computers. In the style of the DOD CAC where the private cert never leaves the card itself. Back up the certificate before storing it on the card or USB stick, and then plug that into every computer you want to access. Downside: Without multiple tokens you can't use multiple computers at the same time (easily).
It is still clumsy and painful, and I doubt many users would volunteer for a similar approach.
Certificates? Most users are vague at best about them. Does closing the browser window stop access? Can you share certs? If your laptop is stolen, were your certs compromised? How do you deal with compromised certs? Etc, etc. Ask a generic user something like this and enjoy the answers.
This is slowly changing -- as more organizations switch to cert-based authentication more users get to know and trust them which can lead to wide adoption for personal use.
Source: I've done contracting for government and I've worked on a PoC to determine whether we could bring 2FA to a 10M people country at once.
You'd need to be able to maintain/deploy certs to all your devices in a way that's simple enough for non-technical users to understand; never mind the added requirement for safe private cert handling on each device.
Once you're outside of the browser accessing services -- take banks, for example -- now I need my browser to have the cert and my mobile apps individually to have those certs as well... or I have certs for the browser and passwords for the apps (more complexity for the user). Sure, my devices can have a cert safe or similar, but the apps/browsers would have to respect that sufficiently for it to be useful (hard enough to get my password manager to work well with my phone apps... certs... eek!)
Finally, each browser, app, etc. may have its own way of dealing with things... making for even more complexity.
I could go on.
Point is there's an awful lot of friction to make that work as simply as the less secure, but apparently socially acceptable, passwords we use today. Whether that should be "the way" or not is irrelevant... consumer choices include factoring in immediate ease of use, right or wrong.
I think there are better arguments for 2FA, since there is something approaching reasonable standards (most applications I encounter support Google Authenticator or at least that standard). You still end up with another ease-of-use issue, but that might be a more surmountable one. (I do hate, though, that I have to use my 2FA on the device which I get my 2FA auth codes from... I understand why, but still...)
Of course, Hacker News is a start-upish sort of community... so maybe unified technology security management for consumers is the next big thing to be "disrupted". :-) Have at it!
A login through a QR code (basically a token) is just normal TLS with the same MitM risk. It's just an application-layer login.
If you're thinking of a protocol like Kerberos, then yes, you can derive a shared secret because there's a single-point-of-trust authentication entity (the KDC) which has knowledge of both your password and the server's password/key, and yes, your password certifies that you're talking to the right server (as long as the KDC is trustworthy). But that's not how TLS mutual auth works.
How is it the same thing? If it's the system I'm familiar with (the QR code is basically an OTP for your phone), then they're nowhere near "basically" the same -- or the same at all.
"When you enable SSL decryption for your endusers, SSL-encrypted traffic is decrypted, inspected, and then re-encrypted before it is sent to its destination."
In my work deploying desktop software to institutions, often academic and medical, I had to disable client-side certs to allow for connections to be made.
Apparently, even though they should be technically more secure, client-side certs are so infrequently used that these types of gateway monitors block connections made with them!
Maybe not as surprising, but non-browser software making connections to servers with non-publicly-signed SSL certs (but embedded CA chains) were also blocked.
Posture checking and zero internal network trust really needs to take hold in these places. If people must tap TLS, they should do it via installation of software client side, not MITM.
If the client certs are provisioned correctly (and validated correctly on the server side), then they fundamentally should break corporate TLS/SSL gateways. For example, I would store the username in the client cert's EMAIL field when provisioning it, and then check on the server that the same user is authenticating, binding the client cert to a single user. The TLS/SSL gateway is then going to need to have each user's client cert on hand (with private key) if it wants to intercept the traffic.
The only way around I think would be if the TLS/SSL gateway (such as Forcepoint) gave the user a way to upload their client cert with its private key directly into the TLS/SSL gateway... hmm, I wonder if they already allow this.
p.s. I call these gateways "lan-in-the-middle" attacks.
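The user/cert binding check described above can be sketched like this -- a self-signed cert stands in for one issued by the internal CA, and the field usage and names are illustrative:

```python
# Verify that the presented client cert's email field names the user
# who is authenticating, binding the cert to a single account.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def make_client_cert(email):
    """Self-signed stand-in for a cert issued by the internal CA."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.EMAIL_ADDRESS, email)])
    now = datetime.datetime.now(datetime.timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )


def cert_matches_user(cert, username):
    """True only if the cert's email field names the logged-in user."""
    emails = cert.subject.get_attributes_for_oid(NameOID.EMAIL_ADDRESS)
    return any(e.value == username for e in emails)


cert = make_client_cert("alice@example.com")
assert cert_matches_user(cert, "alice@example.com")
assert not cert_matches_user(cert, "mallory@example.com")
```

An interception gateway that doesn't hold alice's private key can't complete the handshake as her, regardless of which CAs it has installed.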
Signing up at a site is just requesting a client cert at the site's private CA.
It requires a user agent (a browser plugin) on the client side. The agent keeps track of which certificates belong to which sites, so it actively blocks MitM attacks.
Granted, if you need to share your certificates, you'd have to copy them over. For that, use the sync feature of your browser or design something better. But syncing is a separate concern, independent of the authentication protocol.
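The "signup is just a CSR to the site's private CA" step might look roughly like this (a sketch using the `cryptography` package; all names are illustrative, and a real CA would obviously vet the request before signing):

```python
# Signup as a CSR: the user agent generates a key and a certificate
# signing request; the site's private CA signs it into a client cert.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def new_csr(common_name):
    """Client side: the key never leaves the user agent, only the CSR does."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name(
            [x509.NameAttribute(NameOID.COMMON_NAME, common_name)]
        ))
        .sign(key, hashes.SHA256())
    )
    return key, csr


def sign_csr(csr, ca_key, ca_name):
    """Site's private CA: issue a client cert for the requested subject."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(csr.subject)
        .issuer_name(ca_name)
        .public_key(csr.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(ca_key, hashes.SHA256())
    )


ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example CA")])

user_key, csr = new_csr("alice")
cert = sign_csr(csr, ca_key, ca_name)
assert cert.subject == csr.subject
assert cert.issuer == ca_name
```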
The specific issues are:
1. You have to install a client certificate on every device you want to use, and you have to keep that certificate up to date. If you use multiple web browsers (say, for UI development testing) you have to install and maintain the certificate in each one. MIT currently issues client certificates with a validity period of slightly less than one year. That makes for a lot of lost time every year for students, staff, and faculty, spent re-installing client certs.
2. The certificate can be stolen just like a password, but there is no easy way for the client to revoke a stolen certificate. Many CRL implementations are lacking or absent entirely. Therefore, organizations that depend on client certificate authentication typically depend on certificate expiration to re-secure compromised client accounts. (See #1 above.)
3. Client certificates are not supported by all web servers. The major players support it pretty well. But, there's been a proliferation of specialized, micro, and nano web servers over the last several years.
4. You have to invest in securing the signing key for the client certificates. This usually means a decent HSM, which costs on the order of $x00,000 USD. At MIT, for example, there is a web site where anyone can go at any time to generate a new client certificate (again, see #1). This site needs to be able to perform signatures constantly, which means the signing key needs to be accessible online 24/7 somehow.
5. Proxies are a problem. If you try to terminate the TLS connection early, the client-certificate-related operations are not "proxied back". Some proxies like HAProxy will let you pass back environment variables set during the client certificate authentication process, but that is obviously not the same as having the final destination webserver perform the validation. This has become much more of an issue with the advent of CloudFlare's TLS-proxying CDN.
6. If you implement logic to expire certificates at the end of a customer's subscription or enrolment period, it can cause significant headaches with processes where it would be helpful to still be able to authenticate them. For example, if a customer's subscription to your SaaS site expires and you want them to be able to review their account without inadvertently sharing its details with others. Or, if a student has graduated but still needs to pay some unpaid parking tickets. MIT runs into this issue often due to its use of client-side certificates. If you extend a certificate well beyond the end of the holder's system authorization, you have to put a lot of complex authorization code in all of your local apps and websites. While client certificates only provide authentication, not authorization, many implementers use them for both simultaneously. This is especially true when protecting web content and web sites with client certificates.
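Point 6 boils down to keeping the authorization decision out of the cert's validity period. A toy sketch of that separation (the dates, endpoints, and account fields are all made up):

```python
# Let a still-valid cert authenticate a lapsed user, and decide
# authorization separately per resource in the application.
import datetime

# Authorization data lives in the app, not in the cert's expiry date.
ACCOUNTS = {
    "alice": {"subscription_ends": datetime.date(2016, 1, 31)},
}

# Endpoints an expired-but-authenticated user may still reach.
GRACE_ENDPOINTS = {"/billing", "/export", "/pay-fines"}


def authorize(username, endpoint, today):
    account = ACCOUNTS.get(username)
    if account is None:
        return False  # authenticated by cert, but unknown to this app
    if today <= account["subscription_ends"]:
        return True  # active subscription: full access
    return endpoint in GRACE_ENDPOINTS  # lapsed: billing-style pages only


assert authorize("alice", "/dashboard", datetime.date(2016, 1, 1))
assert not authorize("alice", "/dashboard", datetime.date(2016, 3, 1))
assert authorize("alice", "/billing", datetime.date(2016, 3, 1))
```

With this split, certs can be issued with a boring fixed lifetime and never encode business state.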
One of the banking sites did as well but dropped it; now my Gmail is more secure than my bank account, since there is no 2FA at this bank.
The Java applet approach must cause endless customer support requests.
Another problem is how to manage your keys between devices.
If this would be offered as an extra option, just like Gmail has 2FA, that would be great!
Also, a physical thing is easier to lose than a password is to forget (new/crashed/different computer, another browser, etc., etc.)
But I agree it is a good solution as 2FA for internal applications.
You really just want a password-protected client cert then (the password can be specific to the cert, or FDE). Your in-brain password keeps someone who steals your computer from getting access, and the client cert ensures that an actor who can sniff the whole network and break TLS still can't impersonate you.
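Concretely, that's just an encrypted PKCS#8 key file. A sketch with the `cryptography` package (the passphrase is illustrative):

```python
# Store the client cert's private key encrypted under an in-brain
# passphrase, so a stolen laptop yields only ciphertext.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

key = Ed25519PrivateKey.generate()

# Serialize as passphrase-encrypted PKCS#8 PEM.
pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(
        b"correct horse battery staple"
    ),
)
assert b"ENCRYPTED PRIVATE KEY" in pem

# Loading requires the passphrase; a wrong one fails to decrypt.
loaded = serialization.load_pem_private_key(
    pem, password=b"correct horse battery staple"
)
try:
    serialization.load_pem_private_key(pem, password=b"wrong")
    raised = False
except ValueError:
    raised = True
assert raised
```

Browsers and OS keychains do an equivalent dance when you import a passphrase-protected PKCS#12 bundle.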