Why is nobody using SSL client certificates? (pilif.github.io)
241 points by cottonseed on July 11, 2013 | hide | past | favorite | 153 comments



I'm the author of this post, which I wrote in 2008. Since then my views have changed a bit: for one, the added security is debatable, as the client cert is easily accessible to malware and thus could easily be stolen.

The other thing is that renegotiation has been somewhat broken since the related security flaw around 2010, so you can't have parts of a site accessible without client certs and other parts requiring a cert.

This has been fixed since, but I don't know how well it's supported by clients.

Finally, compared to 2008, we're using many more devices for accessing various web applications now. Having the client cert bound to one browser on one machine is annoying and synchronizing it truly securely is probably impossible.

I do stand by the horrendous UI though. Even if the other issues weren't there, the UI certainly kills client certificates for normal users.

Then again, crypto UI is hard and I'm afraid the incentive for redesigning this is pretty much zero because nobody is using it to begin with and because it's flawed anyways.


The malware point is moot.

Malware can steal saved passwords, which are only optionally stored encrypted; the cert, at least in the one browser I checked, is always stored encrypted.

And even then, if you have malware on your box, any and all security measures are worthless at that point.

Decent renegotiation and mobile client support will come if people using this on the desktop request the feature. That's how it has worked for everything for a while now.

Managing devices and revocation is also trivial. Most sites already do that easily with API keys and cookies (e.g. Google 2-step auth lets you control which cookies are still valid for the 'remember this device' feature). The same logic could be used for client certs.


And managing multiple devices is actually a security boon. If your auth creds get stolen, you'll know which specific device was compromised and can revoke just that one instead of resetting all creds.

Also agree that malware is moot (it can steal auth tokens live as they're submitted, so even physical tokens requiring 2-factor pins get compromised).

The problem is browsers haven't been proactive in making the technology user friendly, so websites don't adopt it, so browsers don't make the technology user friendly. A couple large icons and good design decisions would make it as easy as logging into your screen saver.


That's a reason I use different sets of SSH keys from different hosts, or at least different classes of hosts.

It's rather troubling to find that many SSH management tools (including Puppet/Chef recipes) deal poorly with multiple keys per user.


It's more than icons and design.

Users. Do. Not. Understand. Certificates.

If the word "certificate" or "public key" or "private key" appears anywhere in the process, it's a non-starter. If they have to select a certificate from a list, look at a "fingerprint", or deal with any other jargon like "x509" or "certificate authority" or anything along those lines, it's dead in the water.


What I meant about malware is that client certificates can't provide additional security over passwords because malware can easily steal both.

So it's all hassle and no advantage; hence, useless.


Unlike passwords, client certs cannot be guessed or generated from wordlists, they cannot be used to authenticate to other sites even if stolen, and they cannot be forgotten (at least I tend to forget passwords for rarely used sites far more often than I lose data through hardware failure).

So I'd say there are some advantages over passwords. And most of the functionality is already there, the only thing missing is a browser function to generate a keypair for a domain and send the public key, without much user interaction.


That's already a thing, if I understand what you're asking - see the <keygen> tag.
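For anyone who hasn't seen it, a rough sketch of how that tag was used (the attribute names are real; the enrollment endpoint is made up):

```html
<!-- On submit, the browser generates a keypair, keeps the private key
     in its own store, and posts the signed public key (an SPKAC blob)
     along with the rest of the form. -->
<form action="/enroll" method="post">
  <keygen name="pubkey" challenge="random-server-challenge">
  <input type="submit" value="Generate key and enroll">
</form>
```

The server is then expected to sign that public key into a certificate and serve it back, which the browser installs next to the private key it kept.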


Client certs can't be phished


They can be revealed by other means (a trojan/virus on the client). But yes, the anti-phishing aspect is a good one.


Passwords can fail in all of those ways, plus phishing.


Not really. You could use a remote storage medium and load the cert into memory only when needed, if the usage justified it... which isn't the case, for the reasons you outlined eloquently in the article.

Also, the client cert, as with private keys, lets you memorize ONLY ONE password (make it a passphrase, please) instead of a bazillion (or, gasp, reusing them).

So, no, it's not an extra hassle. It's a solution to the real hassle of passwords on websites. If it were used.


I agree with your malware point. If your system is compromised, and attackers are past the 'airtight hatchway', then you're doomed anyway.


Certificates can also be marked as non-exportable.


Unless there's some trusted hardware involved, I doubt that effectively prevents malware from getting them.


The U.S. government is a very large user of PKI.

One is issued an identification card which is really a smart card ("common access card"[0]) with certificates on it and that is used for authentication.

PKI works but "it's too {hard|much trouble}" is a frequent excuse.

[0]: https://en.wikipedia.org/wiki/Common_Access_Card


Isn't the point of client certs that the private key should never leave the machine? I.e. when you want access from a smartphone, you just initiate a new certification process, some admin ticks off the new public key, and you're set. For consumer-facing apps I could see an initial authentication process where, for every new public key, the user gets an email in order to activate it. Used this way, I think there's also no reason to use the browser UI for importing/exporting certificates, at least not for the users themselves. Still, the question remains why nobody is using it.


This is why: I know one company & service which did. But after a year or two they gave up. It just complicated things, overloaded the helpdesk, and made customers angry. They used client certificates to count 'workstation licenses' for a web app.


Well, using it for licensing is of course harder to argue for than increasing the user's security. May I ask what the specific problems were in your scenario? I'm guessing users with multiple browsers? That's another reason why I hate Windows not having a central MS-provided keyring. If MS had pushed for it soon after the POSIX community and Apple, passwordless authentication might already be a standard nowadays.


The UX on iPhone isn't so bad if you are using .mobileconfig files and there aren't conflicting certificates. I'm actually starting to open-source some of my work in this space.

OSX will still prompt for the certificate even if there is only one matching, but again it isn't too bad.

Though in iOS 7 / Mavericks I'll probably switch to using Kerberos, which will be seamlessly integrated into Objective-C's networking APIs.


Persona synchronizes client private keys. It's not SSL client certs, but it's not all that bad.

What I'd like is the same thing, but with me as the owner of the private keys, stored on my external, physical token.

This works with SSL client certs, yes, but it's a pain, and the server has no trust in a client it has never seen before.

GnuPG actually works like that properly (with the Monkeysphere project for the web stuff), but it's not used anywhere...


Adding to your third point, I am pretty sure mobile browser support for client certs is still awful, which is a dealbreaker for just about every site out there these days.


Try having two client certificates and seeing the browser UX for selecting which one to present to which site. You'll see why they aren't used.

Client certificates should be much more popular in backend applications, where they're straightforward to use, flexible, and fairly trustworthy. But they're not a good end-user technology.


I agree that for machine to machine communication client certificates work well. Especially if you implement your own CA so you don't worry about the public infrastructure.


If you're writing your own code for both sides, an approach that I've seen work just as well is to keep whitelists of certificate fingerprints.
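A minimal sketch of that whitelist approach in Python (the function names and whitelist contents are made up; `getpeercert(binary_form=True)` is the stdlib way to get the peer's DER-encoded certificate after the handshake):

```python
import hashlib
import ssl

# Hypothetical whitelist of SHA-256 fingerprints of certs we issued ourselves.
TRUSTED_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, as lowercase hex."""
    return hashlib.sha256(der_cert).hexdigest()

def peer_is_trusted(tls_socket: ssl.SSLSocket) -> bool:
    """Check the already-negotiated peer cert against the whitelist."""
    der = tls_socket.getpeercert(binary_form=True)
    return der is not None and fingerprint(der) in TRUSTED_FINGERPRINTS
```

Since both sides are your own code, this skips CA chains entirely: a cert is trusted because you put its exact fingerprint on the list, and revocation is just deleting a line.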


Not too bad on the Mac: it prompts you to pick one when required. If you use more than one cert on a single site, you need to go into Keychain Access and delete the favoured-cert-for-site entry to make it prompt you each time. Move client certs to their own keychain to password-protect them.


Speaking of backend applications, Puppet uses client certificates to authenticate machines that are being administrated by a Puppet server. I used to work on that code - none of us would have ever called it "straightforward". The library support for client certs is underdeveloped and poorly documented - probably because so few people are using client certs in production.


I feel like I've written a fair bit of client certificate code in Ruby and haven't had much of a problem. Where did you run into problems?


I wish I could remember the specific issues. It's been a few years now, and all I can recall about it was that feeling that we had a couple of experts in the open source community who could make it work, and the rest of us just shrugged and accepted their patches. I know that the bugs tended to cluster around the Certificate Revocation List, but I can't remember how they manifested.


Agreed if you configure your HTTPS server to accept certificates from any issuer. But if you restrict it to a smaller list of CAs, users are only prompted to choose among their certificates matching those CAs.
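In stdlib-Python terms, a rough sketch of that server-side restriction (all file paths are placeholders; this covers the verification side, while advertising the acceptable-CA list to the browser is handled by the underlying TLS stack):

```python
import ssl

def make_client_cert_context(server_cert: str, server_key: str,
                             client_ca_bundle: str) -> ssl.SSLContext:
    """TLS server context that only accepts client certs from our own CA(s)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(server_cert, server_key)   # the server's own identity
    ctx.verify_mode = ssl.CERT_REQUIRED            # demand a client certificate
    ctx.load_verify_locations(cafile=client_ca_bundle)  # trusted issuers only
    return ctx
```

With `CERT_REQUIRED` and a narrow CA bundle, a client cert from any other issuer simply fails the handshake, so users never get a wall of irrelevant certificates to pick from.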


It's possible to self-sign client certificates, right? If so, you could just limit the list to your own.



To your mind, is this more a matter of the particular UI that currently exists, or are the UX issues more fundamental to the technology?


You can imagine a UX that abstracts certificates and their generation into a notion of accounts or personas, but the binding of those accounts to actual websites would be tricky and might involve a new protocol, which is something you don't want to be doing with TLS certificates.


The whole DOD is using it, but they're on crypto smart cards, so many folks don't realize that's what they're doing. Email, single sign on, web site auth, etc. Works great!


It's easy enough to use on a smart card, but try setting that up. I'm trivialising now, but you're looking at...

- Card manufacture

- Key handling

- Enrolment

- Card lifecycle

- Certificate lifecycle

- Identity synchronisation

You could buy a stack of white-labelled cards, or if you're the DoD you'd roll your own. That's shopping for silicon wafers, contact plate assemblies, mag stripes, holograms, RFID blanks, plastic card blanks, and card stock.

DoD or not, you've got to manage secure key transfer (keys created at a bureau, exported using GIS TX3 and multiple smartcards, sent to the client via multiple motorcycles, key reassembled and imported into an HSM).

Enrolment is a hefty process. Apply for smart card. Personalise physical card (typically photo and name), provision (link card to user), give card to user, mail PIN to user (out of band), and then activate card.

That's just card manufacture and provisioning. It doesn't cover provisioning and de-provisioning software and hardware, or card lifecycle (suspend when lost, disable if stolen or not found or employment terminated, and so on).

Then there's CA integration, (often multiple) directory integration, and hardware integration.

The point of all this is that security can be easy, but it comes with a big fat price tag and some very specialised skill.


You're right, it's a ridiculously large organizational overhead. Not to mention the CAs become a single point of compromise for the whole PKI. Even after 10 years or so of the CAC, the DoD process is still not pain-free. They've finally moved to OCSP instead of CRLs, which relieves some of the pain (the CRL is so large now that it will bluescreen Windows on import to the registry, haha).

Even the DoD doesn't roll its own cards and foundry stuff: they're standard Gemalto (and a few other providers') cards. You can order blanks yourself, I believe.

It's also a black art - VERY very few people seem to know how to 'bootstrap' the system (enable PKI for a domain or web server, get new certs issued, etc.) even within the overarching framework.

That said, my point still stands: millions of people (literally) use PKI for SSL client certs daily.


> the CA's become single-point-of-compromise for the whole PKI

It's a bit more nuanced than that. A compromised CA can issue new certs which spoof an old one's identity, but it can't be used to create certs which will read traffic from an existing host. The big risk is that someone subject to a MITM attack doesn't clue in on the certificate change (their client should note this) and accepts the new key.

Pinned / validated keys (used for major sites in some browsers) also help with this somewhat.

Not that SSL/TLS isn't a huge mess.


A compromised CA in a client-cert PKI world, like the DoD one I was talking about, is effectively an end-game scenario. Every machine in the org would have to be updated (root CAs don't have CRLs, afaik). The big risk isn't a MITM attack - that's a risk with traditional server-side certs. The big risk is that all of a sudden there are unlimited "legitimate" users, with no control over identities anymore. It's like getting write access to /etc/passwd.


Doesn't that depend on how you authorize the client certs themselves?

It's one thing to _issue_ a cert. It's another to _approve_ that issued cert.

If you're blindly accepting certs, you've got other issues.


> Enrolment is a hefty process. Apply for smart card. Personalise physical card (typically photo and name), provision (link card to user), give card to user, mail PIN to user (out of band), and then activate card.

You missed the part where you drive several hours to the closest military base and wait in line for several hours (think DMV and TSA all rolled into one).


If you're actually on the DoD network, that's only necessary in the rarest of cases. Retirees sometimes have to drive a while, that's true, but they are not getting CACs anyway; they're getting old-style ID cards.


In my experience that is the common case: working for a defense contractor not on, but near, a military base. I could very well be wrong though.


I don't work for the DoD, but you have the process right as far as I've seen. Don't forget that they seem to want a new set of fingerprints every time you go in.


Yeah, it does suck to be CTR, I'll admit. Especially if you need separate CAC credentials for things like base access.


October is hell month :)


Try when you don't have prints. Took a long time to get my card issued.


Sounds like a business opportunity.


There are several companies in this space. I was a research lead for a smartcard access project for iOS platforms. We had to perform a two-way handshake using the certificates on the smart card. The most difficult part was getting data signed by the private key on the card during the handshake process.


There are already companies in this space though.

One example is secmaker.com, but I don't know if they operate outside .se yet.


True, it works great for the average DoD user (and in an enterprise in general) because they have a helpdesk that sets everything up for them. Once it's configured it works fine, which is consistent with the message of this article.

Good luck to them if they want to install a different browser (non-IE) or do anything non-standard... if they're even allowed to do so.


For a while I was checking upcoming OS X releases for CAC compatibility. At least on OS X, it's surprisingly easy to get set up, with the hardest step being finding a compatible card reader. Once you have that, you usually have to install a PKCS#11 module (usually CACKey for DoD). Once that's all done, it's presented as an additional keychain for the rest of the system to use. I know Chrome and Safari work with it without further fiddling, and I'm pretty sure Firefox will as well.


Firefox did a lot of work to make CAC cards work for the DoD; they didn't develop a FIPS module just for the fun of it.


I've always liked the idea of smart cards, but I also wonder how I would use a smart card to simultaneously log in to a site from, say, my PC and phone. It seems like the card can only be plugged in one place.


Well your phone already has one smart card plugged into it all the time (assuming it's a GSM phone). Perhaps there's a clever way to leverage that as one factor in the authentication process.


That is, in fact, the point :) Using cards with your phone is a serious pain in the neck still, either way. You could always get an alternate token/card, so you have two you can use.


I would love to hear more about this. I've been looking for a good sso solution that uses crypto cards.


My understanding is that the private key is on a CAC card[1,2], which is issued to all DoD employees and contractors who need access to DoD resources. Readers are usually built into the keyboard, and I'm guessing that there's something similar to ssh-agent running to maintain the key until you're AFK.

--

[1] http://en.wikipedia.org/wiki/Common_Access_Card

[2] http://www.cac.mil/


As a DoD CAC user and developer, this is correct (although the card readers at my company are built into the laptop or connected via a USB adapter). I don't think there is a background agent that maintains the key, though; I think the card reader is just queried when the certificate is needed. After the cert has been verified, you're free to pull the card out.


The key never leaves the smartcard: the card hardware will not allow it to be retrieved. Rather, your system has drivers which ask the card to do all of the signing operations which your system would normally do if it had access to a key.
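A toy illustration of that division of labor (everything here is a stand-in: a real card speaks PKCS#11/APDUs and does real RSA or ECC signing rather than an HMAC, but the shape is the same - sign() works, export doesn't):

```python
import hashlib
import hmac

class ToySmartCard:
    """Stand-in for card hardware: holds a secret, signs, never exports it."""

    def __init__(self, secret: bytes):
        self._secret = secret  # lives "on the card" only

    def sign(self, data: bytes) -> bytes:
        # The host sends data in; only the signature comes back out.
        return hmac.new(self._secret, data, hashlib.sha256).digest()

    def export_key(self):
        raise PermissionError("key is not exportable from the card")

def host_side_signing(card: ToySmartCard, handshake_blob: bytes) -> bytes:
    # The driver delegates the signing step it would normally perform
    # with a local private key to the card.
    return card.sign(handshake_blob)
```

The point of the sketch is only the interface: the host never holds the key material, so malware on the host can at worst ask the card to sign things while the card is inserted and unlocked.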


> After the cert has been verified, you're free to pull the card out.

Last time I checked DoD systems are configured to automatically log you out of your session upon removal of your CAC.


This is an unrelated security measure, but you are correct.

And if you pull the card out, you can't access anything that requires it anymore (and on a Windows domain that DOESN'T lock / terminate the session, you'll be able to access things until your Kerberos ticket expires or you need to get to something you don't have a ticket for).


I once volunteered at a hospital where they made use of a smart card security system - among other things, it would log you out if you removed your card from the reader. This closed all the programs you were running - and logging back in was a slow process.

Needless to say, the smart cards stayed in the users' PCs even when they weren't at their desks.


That's disappointing; Windows has multiple options for handling a smart card's removal, one of which is simply locking the screen. Sad they decided to use the "force log off" setting instead.


I think this is a feature of the ActivClient middleware. It also handles the PIN entry when you insert the card. The default Windows behavior is to leave the certs in the cert store.


ADFS and UAG - the crypto card is tied to an account in AD, AD is configured to trust certs signed by a given CA, and the UAG software then uses that authenticated user to provide SSO.


Yes, it's true. Not sure why it's news, other than it's a new article?

This has been an abomination since .... the functionality was added. It's hard even for geeks to deal with it - it makes 0 sense for non-techies to even contemplate dealing with it.

Relatedly, browser UI for dealing with cookies has been abysmal since day one as well. Instead of making cookie information easily visible and manageable, browser makers resorted to shifting cookie management stuff around in 'preferences' a few times (is it 'privacy'? or 'advanced'? why not 'cookies'?), and people champion using specific browsers with specific plugins as an optimal solution.

Client certs are even worse off - few people even bother to write plugins to deal with this stuff. It's chicken/egg as well - no point in updating browsers to be better if no servers will modify their code to deal with it.


>> This has been an abomination since .... the functionality was added. It's hard even for geeks to deal with it - it makes 0 sense for non-techies to even contemplate dealing with it.

Are you criticizing the concept or the implementation?

Conceptually, for a user to maintain 1 or 2 "identities" on a machine seems easier than maintaining 50 "logins", which already require a password manager if you're using reasonably hard passwords.


Implementation. That's why it's not being used.


> Instead of making cookie information easily visisble and manageable

How do /you/ suggest making base64 encoded blobs of context-less ID numbers understandable to the average browser user, assuming that they could be persuaded to care in the first place?

Even as a geek who knows where to look, I look at the cookies from most sites and have no idea what they mean :P


It doesn't particularly matter if you know what they mean. A sidepanel that opens at the click of an icon and tells me that 9 cookies were used, set by Google and Yahoo and Bing and 5 sites I've never heard of... that would immediately give people a lot of visibility. Giving them the option of deleting/removing/blocking right from that sidepanel would empower people to deal with it themselves, without having to wade through scary and non-standard menus. Instead, we got a decade of ignoring this and then EU legislation that forces people to put stupid bars at the top ('we track you with cookies, click here to accept') but does pretty much nothing for user education and empowerment.

We have 'home' and 'reload' and 'back' and 'forward' buttons. Actually, no, my Safari doesn't even have a 'home' right now, but a 'share'. Why not a 'cookie' button?

Again, it doesn't particularly matter whether you can decipher base64-encoded stuff or not - the average person has no clear way to even understand the number and type of cookie data being tracked, nor an easy way to block/unblock without wading through multiple levels of menus which take them away from the site they're on.

Doing this on smartphones would present a challenge, but is not impossible. Doing it in desktop browsers shouldn't be all that hard. We see the data in Firebug - parsing out request headers and showing the cookie data (with block/delete/allow options) in a sidebar panel would not be hard. But it won't have any real impact unless it's built into multiple browsers and we have an education period.

The late 90's was full of "ban cookies! reject cookies! they're evil!" hype, then we all forgot about it for a while, endured the rise of doubleclick and the like, and got legal solutions vs good UI solutions. :/

EDIT: I hit save too quickly. I don't think this will ever happen - at least two major browser makers have a vested interest in keeping the ad-based tracking economy moving along; any built-in browser functionality which would interfere with that will never fly. I'm talking about Google and Firefox (which relies a lot on Google's wellbeing re: ad revenue). I imagine MS is in a similar boat, but maybe not to the same degree as Google. I can't speak for Apple's potential here, but given the WebKit tie between Chrome and Safari, it's probably unlikely as well.


When I click on the https padlock on this site, Chromium shows me 1 ycombinator cookie and 0 third-party cookies. In 2 clicks I can block the cookie, if I so choose.


When we all start using SSL for everything, perhaps that's a way to go.

If it's so simple, why don't we have a cookie icon up there so that it's visible all the time though?

EDIT: it works that way for non-SSL sites too - thank you - I learned something new - I may have to switch to Chrome as my primary browser again. :/

It's still not quite as easy as I'd hoped for, because it's mixing cookie data and other stuff in the same popup panel, but it's a start. Having it be a cookie icon so it would be easier to know what you'll get when you click it (perhaps a separate icon) would still be better.


Anyone know the story about where browsers actually are at with these UI bugs then?


I think this would need a total rethink before the masses could use it.

I'd love to see browsers implement it by forcing people to store client certs on a USB-key or a phone by default. I think some kind of physical item that contains your keychain would be much more intuitive to many. Everyone is familiar with mechanical keys, they know not to leave them around, they know that they need them to unlock things and they know if they lose them they need to replace them.


The analogy to mechanical keys doesn't hold up under closer scrutiny, though, right? House keys don't serve as an identity, just an access mechanism. You don't lose all future access to your house if you lose your keys; a locksmith can independently verify (with reasonable certainty) that it's okay for you to obtain a new set of keys which give access to the same house.

I feel like the human predisposition to risk aversion is even more of a factor preventing adoption among average users than poor UX (not to mention lack of awareness). What do I tell my parents when they ask "What if my computer crashes? Would I not be able to log in to the website and see my stuff? What good is using a website if I can't access it from any computer?"

Until something as securely portable and loss-resistant as one's own memory is achieved, I don't see passwords being less popular than any other access mechanism for the average user, no matter how significant the other downsides.


The key management tools are awful. People complain about the complexity of PGP but at least there it's one-click to export a key, look at the details of it, sign someone else's key etc.

But yeah, it would be nice to use this tech rather than reinventing the wheel. The underlying implementation is sound.


The whole system, now close to 20 years old, needs a reboot. I know, that's easy for me to say. But it's showing its age, and could really benefit from a complete rethink.


Do you have specific criticisms? In cryptography, a system that has withstood scrutiny for that long is a rare and exceedingly valuable thing. By all means use it as the ugly foundations that you hide beneath the beautiful edifice on top, but throwing it away would be madness.


You're right, I'm being vague. I'm looking for two things, really:

1- simpler implementation — one that is easier for users to understand, and includes client certs by default

2- one with a re-engineered cryptographic implementation, one less likely to have the kind of numerous security flaws that have been uncovered in SSL/TLS over the years

SSL was originally meant to serve two purposes:

1- encrypt communication

2- verify, through a trusted 3rd party, that the remote service you're contacting is actually who it says it is

Most laymen, and many technologists, do not know that #2 even exists. Worse, this authentication portion has been all but destroyed by liberal certificate authorities like GoDaddy. In my experience, anyone can get a certificate for a domain without any kind of check to see if you have the right to use that domain. So, in theory, you could register "amaz0n.com" with GoDaddy, get a cert for it, and start using it, without any kind of background check. In the early days (when Verisign was the only CA in town), a business had to supply a Dun & Bradstreet number and be subject to other background checks before being issued a CA-signed cert. If that sounds heavy-handed, it shouldn't: Verisign was supposed to have the users' backs. If you tried to get a cert for Amaz0n.com, it would have been rejected unless you could prove you actually are Amazon.com.

I think that kind of authentication has a real place on the modern Internet.

edit: formatting


>1- simpler implementation — one that is easier for users to understand, and includes client certs by default

I'm not sure what you're suggesting. A nicer UI would definitely be a good idea, but you can do that without changing the underlying crypto implementation.

>2- one with a re-engineered cryptographic implementation, one less likely to have the kind of numerous security flaws that have been uncovered in SSL/TLS over the years

That's not how you get secure cryptography. You need not just a secure algorithm but a secure implementation, one resistant to timing attacks, compression attacks, and all sorts of nonobvious things. OpenSSL is far from developer-friendly, but the vulnerabilities have been hammered out over those 20 years, and there is a body of knowledge on how to use it securely. A new implementation would have to go through that all over again.

>From my experience, anyone can get a certificate for a domain without any kind of check to see if you have the right to use that domain. So, in theory, you could register "amaz0n.com" with GoDaddy, get a cert for it, and start using it, without any kind of background check. In the early days (when Verisign was the only CA in town), a business had to supply a Dunn & Bradstreet number and be subject to other background checking before being issued a CA-signed cert. If that sounds heavy-handed, it shouldn't: Verisign was supposed to have the users' backs. If you tried to get a cert for Amaz0n.com, it would have been rejected unless you could prove you actually are Amazon.com.

True enough, but fixing that doesn't require any changes to SSL itself - you just have to curate the list of root certificates the browser trusts more carefully.


I've been playing around with DJB's NaCl (http://nacl.cr.yp.to/) and have found it to be a very easy API to use (if a bit quirky to install). Using it, I've come to the conclusion that the actual cryptography is trivial compared to key management ("trivial" in the sense of "it's done and proven, it should be easy to use," not "anybody can write their own encryption system").


Verisign still verifies - at least to some extent. Last time I got a cert I had to take my business registration stuff to a notary and get them to notarize a statement to the effect that I am who I say I am, etc.


I wouldn't say SSL "withstood scrutiny" after all. We now know that state actors routinely MITM popular services with rogue keys (Iran, Turkey, Egypt, China and Syria are the ones we know about), and are happy to steal keys to distribute malware (US/Israel with Stuxnet). It was fairly robust in the 90s when knowledge was scarce, but as malicious actors got more confident with the technology, it started to show the cracks.

Now with PRISM and total global surveillance becoming a sad reality, we could certainly do with more decentralized approaches and (ideally) overcoming the "authority" paradigm (but this will likely remain a dream).


Those are weaknesses in the signing authorities. Signing authorities are a layer of SSL, and a rather high one. It is possible to use SSL without them. I have an app I've written that uses SSL, and since it's essentially one thing we own talking to another thing we own (from a security perspective), there's no signing authority, no Verisign, no NSA, just, is this the exact SSL cert we issued to this SSL user or not?

Not that there aren't other problems at times with SSL, especially depending on how you use it, but your criticism may be more limited than you realize.


I think the strongest criticism is that nobody on the mainstream internet is using this stuff. I'm not sure what your parent meant by a "complete reboot", but my interpretation is that all of the user experience needs a reboot, not the underlying crypto.


I use it for StartSSL. It allows me to use SSL client cert auth for their OpenID instance.[0] I use it to log into StackOverflow and a few prominent sites. I like the idea of my cert provider being a trusted party for auth for other sites. However, trusting anyone other than himself in this post-PRISM era is probably a mistake.

[0] http://www.startssl.com/?app=14


It's so annoying that OpenID providers support either client certs or using your own domain, but never both!


I have not tried both, so I did not experience this problem. I do know SSL client certs do not work well, and wanted to work on building my own solution that does what StartSSL does (an OpenID endpoint with SSL-cert-based authentication). Does anyone know of ways to do this?


Nginx supports client certs[1], so I'd say the easiest way to get started is to get some free OpenID server (SimpleID[2] seems maintained) and configure nginx to require cert authentication to access the login path.

I'd do it myself, if I used OpenID more than once every two months or so.

[1]: http://nategood.com/client-side-certificate-authentication-i...

[2]: http://simpleid.koinic.net/
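For the curious, a minimal sketch of that nginx setup (the paths, port, and `/login` location are illustrative, not taken from the linked tutorial):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # CA whose client certificates we accept
    ssl_client_certificate /etc/nginx/ssl/client-ca.crt;
    ssl_verify_client optional;  # request a cert, but enforce it per-location

    location /login {
        # reject requests that did not present a valid client certificate
        if ($ssl_client_verify != SUCCESS) {
            return 403;
        }
        # pass the subject DN to the OpenID server (e.g. SimpleID) to map to a user
        proxy_set_header X-SSL-Client-DN $ssl_client_s_dn;
        proxy_pass http://127.0.0.1:8080;
    }
}
```

Using `ssl_verify_client optional` lets the rest of the site stay reachable without a cert while the login path insists on one.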


Same goes for S/MIME which uses X.509, too. But S/MIME recently got traction and usability again because of OSX Mail.app and iOS mail support.

People should really consider S/MIME for mail encryption, as nearly every MUA (mail client) can deal with it and large institutions already use it (e.g. for SSO).

https://gist.github.com/rmoriz/5945400


MIT uses it, https://ca.mit.edu/, but it's a constant source of helpdesk headaches. Managing renewals and revocation is hard.


We are using client-side SSL certificates extensively for API as well as browser-based single-sign-on for 20+ employees on a smattering of Mac/Linux/iPhone/Android devices. Definitely some ramp-up and wonkiness, but it's working well.

Also worth noting that infrastructure components like Cassandra [1] and RabbitMQ [2] leverage PKI as well.

Checkout our Jenkins client-side SSL cert auth plugin: https://github.com/pantheon-systems/certificate-authenticati...

[1] http://www.datastax.com/documentation/cassandra/1.2/index.ht... [2] http://www.rabbitmq.com/ssl.html


It's worth noting that MIT uses these. New students are provisioned client certs, which are accepted for login to institutional websites and even Apple's MIT student discount page. Not sure if any other universities are using this.


Yes. But it's worth adding that MIT also uses a Shibboleth implementation (locally called Touchstone). On many core sites Touchstone handles authentication for the specific web servers. Touchstone itself can be authenticated to with either the individual's X.509 client cert or their password. Many people use password authentication with the Touchstone server even though MIT has a website that (usually successfully) auto-generates and installs the certificate in the user's browser.


Any idea how many people primarily use passwords? Personally, I've always found it really convenient to use certs for MIT and CSAIL stuff.


That's just installing it on a desktop. Try installing an SSL certificate on a mobile browser.


Both iOS and Android allow you to easily provision keys to a device...


Thanks for that. In case anyone else is interested: https://support.google.com/nexus/4/answer/2844832?hl=en


SCEP via .mobileconfig on iPhone, it's a breeze; I use client side certificates in a lot of places and the UX is transparent so long as the device doesn't contain multiple certificates.


Much easier on a smartphone managed with MDM. Some of the MDM platforms even include a pre-configured CA.


I've actually started open-sourcing some of my work in this space (MDM vendor here) https://github.com/AppBlade my CA/SCEP implementation for iOS devices is part of TestHub [WIP]. These protocols also work for OSX boxes and you don't need full MDM to get most of the benefits, so long as you accept the .mobileconfig prompt.


Correct me if I'm wrong, but unless they can be password protected (which defeats the object...) aren't they less secure than passwords in practice?

I have a password on my phone, because I don't want people with access to it to be able to login and look at my stuff. What's to stop my friend Joe Blogs coming over my house and being able to read my email because I have one of these things installed that allows for a one-click login?


> Correct me if I'm wrong, but unless they can be password protected (which defeats the object...) aren't they less secure than passwords in practice?

The most important difference is that attackers can't get it from the server and re-use it on others, since the server only needs to see the public key of the cert. Attackers can no longer break into a single server and get thousands of badly-secured passwords.

Of course, secure password hashing mitigates this issue, but that means we need to trust each and every server out there to implement that correctly (not likely), while this only requires a correct implementation from the browsers.
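For concreteness, a sketch of what "implementing that correctly" looks like on the password side, using only the stdlib's `hashlib.scrypt` (the parameters are illustrative, not a tuning recommendation). With client certs, the server stores only public material, so none of this care is required:

```python
# Sketch of server-side password hashing done carefully: random per-user salt,
# a memory-hard KDF, and a constant-time comparison on verification.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(candidate, digest)
```

Every one of these details is a chance for a site to get it wrong, which is the commenter's point about having to trust each server individually.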


> What's to stop my friend Joe Blogs coming over my house and being able to read my email ...

The PIN.

X.509 certificates are protected by a PIN in the same way that an SSH key pair is protected by a passphrase. Even if you "physically" obtain the private key, you still need the PIN that protects it before you can use it.

(side note: you can use an X.509 certificate for SSH authentication)


I don't know what browsers do, but it's fairly easy to password protect these keys.

If you then keep the key accessible until some timeout (whether maximum or idle time) occurs then you've got pretty good security.

That's what you already get if you use LastPass et al.
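That timeout behaviour can be sketched as follows (class name and the default window are hypothetical): keep the decrypted key in memory and forget it once the idle window passes, forcing a fresh password prompt.

```python
# Sketch: cache an unlocked private key in memory until an idle timeout expires,
# after which the user must re-enter their password to decrypt it again.
import time

class UnlockedKey:
    def __init__(self, key_bytes, idle_timeout=300.0):
        self._key = key_bytes
        self._idle_timeout = idle_timeout
        self._last_used = time.monotonic()

    def get(self):
        """Return the key, or None once the idle timeout has expired."""
        if time.monotonic() - self._last_used > self._idle_timeout:
            self._key = None  # forget the key material entirely
        if self._key is not None:
            self._last_used = time.monotonic()
        return self._key
```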


I see Persona as the next generation of client certs with usable UX. People are working on this problem, just not within SSL/TLS this time.


Sure, if you like your password going through Mozilla's server unencrypted.


Then don't use Mozilla's server.

(There's a tough choice here: Persona's crutches make it much easier to deploy incrementally, but there's no incentive to ever get off the crutches and people think that the crutches are Persona.)


BrowserID requires nothing from Mozilla. They of course offer a public server so that you don't have to run your own server if you don't want to, but it's entirely, 100% optional.


My suggestion for fixing the client cert problem is to task the browser with certificate generation upon profile creation. There are significant security implications of course, but sites aren't really incentivized to support client certificates.

If 50% of my traffic already tried offering me a client cert the decision to allow them would be an easy one to make.


The reason I would like a client certificate solution that worked (even if it was a malware target) is that I could reject all web traffic to my applications if they came in without a cert. It would decrease my public footprint dramatically.

I don't mind approving a user once per device. We've got to set them up anyway.


Use client certs then? Why do you feel current solutions don't work?


Setup is still too painful even with direct support.


I find it hard to believe that client certs offer much advantage over cookies for most normal web sites.

Even if it was common practice you would still need some way to recover an account after losing a certificate, at which point an email will be sent out with a password-equivalent reset token, so why not just use a password?

Storing certs client side just creates another target for malware, even if the certificate is password protected. You could move the certificate to a key fob, but at that point why not just use a separate second-factor token? You would either have a false sense of security, or have to trust that the client machine is fully secure (impossible).

It makes much more sense to focus on making cookies as secure as possible by setting secure headers, and invalidating cookies that come from a machine that is different than expected.
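The "secure headers" part can be sketched with nothing but the stdlib cookie class (real applications would usually set these flags through their web framework):

```python
# Sketch: the cookie-hardening flags alluded to above, via the stdlib.
from http.cookies import SimpleCookie

def session_cookie_header(token):
    c = SimpleCookie()
    c["session"] = token
    c["session"]["secure"] = True        # only ever sent over HTTPS
    c["session"]["httponly"] = True      # invisible to JavaScript
    c["session"]["samesite"] = "Strict"  # not sent on cross-site requests
    return c["session"].OutputString()
```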


> an email will be sent out with a password equivalent reset token

This is not the only way to do resets and is arguably now the weakest link in authentication (just ask Matt Honan). Sometimes I imagine a world where authentication enrollment and resets are done in person.


Agreed. Although sending an SMS goes a long way to solving this.


CERN (and other research facilities) use these a lot, to manage access to their internal websites and to the grid (the distributed computing platform). IIRC, you have a CERN account, and generate a short-term certificate with which you work. You can use it to submit jobs into the grid, but you can also import it into the browser to access e.g. grid monitoring software, or internal wikis. The idea is that in the worst case, if you lose the certificate to a hacker, it can be revoked quickly without having to block your whole account.

The last time I checked (haven't worked with CERN's computers or the grid for a while), it was pretty user-unfriendly. Obscure shell scripts for everything. It hopefully improved since then. OTOH, it's probably good enough for programmers and researchers.


Buddy, why is nobody using SSL certs anywhere AT ALL?? Plain text passwords are sent on the login forms from sites like Hackathon.io - sites used where hundreds of people meet in a room on the same damn wifi network. RIDICULOUS!! what the hell!


I've seen client-side certificates work in large organizations (e.g. MIT). However, this was in the pre-smartphone era. They just had to support IE, Safari, and Firefox. That was still a pain.

Today, it's much harder to support every browser and OS.


Yep -- too hard to setup.

A really good browser enhancement would be 1-click client side certificate setup.


Pretty sure you could write a Firefox addon for this.


This is the kind of tech Bump needs to get on -- auth and identity management. They showed a way a while back to upload an image by "bumping" your phone on your keyboard. Since phones are singular and ubiquitous and have their own identifiers, your phone could essentially become a makeshift physical key. You could log into any machine, public or private, with a simple "bump" of the keyboard. So Bump, if you're listening, create a Bump Auth.

Of course if you lost your phone or someone stole it, that would be problematic, but I don't anticipate you'd use this as a way to log into your bank account.


I always wished we could use SSL the way we use physical keys. They are a very good example of encapsulation and an easy user interface. When you use the key for your house, you look for a certain color or shape of key. Most users don't know or care how the lock works. Why can't SSL be this way? Instead of having the end user choose a password, have the browser automatically generate a public/private key pair. For the user interface, make a key with a random or user-generated design. When the user goes to the website again, present them with a list of their keys and have them choose the correct one.


The first part, moving authentication from the user to the browser, is exactly what Mozilla's Persona is about. However, the whole "multiple device" thing kind of kills browser-level authentication (persona gets around this by allowing email accounts).


Using the TPM to store client certificates would prevent malware from stealing them. I have no experience with this, but a quick search revealed something[1].

Maybe one use for securely stored client-side certs would be to mark the computer as trusted. For example, Google is probably now using cookies to determine that my desktop is trusted and thus I don't need two-factor authentication to log in. TPM and client-side certs could provide a more secure alternative for this.

[1] http://blog.habets.pp.se/2012/02/TPM-backed-SSL


I used SSL client certificates with a Lotus system from 2000 to ~2011. They're still in use today, I just no longer work there.

It's pretty easy maintenance once you set up systems for account managers to generate and issue new certificates for people.

Then again, this was small time. Few thousand active users. Niche bio/pharma web application.

Large scale it could very well be a bitch and/or unnecessary at this point in time.

The system was designed/created between '96 and '98 and has used certificates the entire time.



We use client-side PKI and certs, not just with employees, but with customers, at OpenDNS.

Works well, is a very strong added factor, and is easy to manage and deploy these days.


What tool(s) do you use to manage your CA? Overall, this is one of the bigger challenges... I don't know of any open source project (or even SaaS) that implements a full CA workflow including signup, approval, revocation, etc.


While the UX side of things could be made quite a bit simpler, I think a significant hurdle for client certificates is authentication through proxies.


Client certificates won't work well if you are trying to authenticate against services inside the corporate network (SharePoint, Microsoft Exchange, AD) and your connection passes through proxies such as Microsoft TMG. Your SSL connection will be terminated at such a proxy and you will have to set up Kerberos or impersonation. It is doable, just tricky to set up.


We use these at work, and once set up they're a breeze. But that's a big "once".

It also makes life needlessly difficult if you want to log in to something from a new/borrowed/etc PC and don't have the keys handy. Where needlessly = near-impossible.


I haven't been able to get a client certificate working on Android at all (stock 4.2.2 with Chrome and Chrome Beta). Perhaps someone here has gotten them to work? I've posted in several forums and haven't found anyone that's gotten them to work.


It definitely works with recent Chrome versions. I use this daily to check my professional webmail on an Android 4.2.2. Once you have provisioned a PKCS#12 into your keystore, open Chrome onto an HTTPS web site that requires certificates from your issuer and the certificate choice dialog pops up.


They're used extensively in a lot of industries; just because they aren't used for logging into Gmail doesn't mean they aren't used elsewhere. I work in utilities and client certificates are used for all or nearly all web clients and APIs.


I think the main problem is the effort of having to maintain a client certificate. I.e, backing it up and copying it on to other devices. It's so much easier to just remember a password, or write it down if you can't do that.


I'm surprised no one has mentioned WebID yet:

> http://webid.info/

WebID works (I saw it demoed at a meetup earlier this year), though I'm not sure how popular — or unpopular — it is.


The guys at CryoKey (https://www.cryokey.com/) are basically trying to do this. I haven't personally used it, though.


Had to deal with this once or twice for some integrations. It's a PITA for little additional security. Often you can just use a VPN or IP restriction instead.


> basic authentication (nobody uses this one)

Huh? I use that all the time.


Anyone know of a good tutorial for using client certificates for (2nd factor) authentication? A quick Google search brings up nothing useful.


I would think getting your cert signed by a CA is a significant barrier to entry also.


I love the graphics... The user responses to the settings screens are hilarious!


Short answer: it's a pain in the butt.


I'm at a loss, but my first instinct is to say that server certs aren't validated properly AT ALL, so I fail to see how client certs would do any better.

For all the hype over PFS (perfect forward secrecy), I don't see how MITM attacks are stopped; cert validation is so bad or nonexistent that I don't see applying more certs (plus Diffie-Hellman) as a solution.


They're as secure as your ability to keep the private keys private, just like with server certs.

As far as MITM and PFS goes; that's handled just the same as regular SSL. Using a client cert doesn't affect that at all.


Which is not secure at all. You can MITM a typical SSL connection in so many ways that SSL might as well not exist.

No real cert validation, forged certs, proxy replays. SSL is a joke.


You can't MITM an SSL session with validly CA-signed certs unless you've pwned the CA, web server or end user's machine. And I don't know what you're on about regarding "no real cert validation". If it's valid, it will be validated. There's nothing wrong with it when it's done right.


Most places where you can authenticate with SSL client certs allow you to add your own self-signed certificate and authenticate using that. All the validation you need is to check whether the cert is in the user's list. You can only forge that by stealing the private key.

There's really no reason to only allow CA signed client certs.


Proxy replays? How so?

As for cert validation / forged certs, they're only problematic because we want to authenticate a server we have never talked to before. With clients certs, the same doesn't apply: the server just needs to ensure the client is the same as the one who registered the account, so there's no need for the whole CA enchilada.


The point isn't eavesdropping prevention, it's authentication that's not broken (like passwords).


No or insignificant economic incentives.



