Launching in 2015: A Certificate Authority to Encrypt the Entire Web (eff.org)
2019 points by mariusz79 on Nov 18, 2014 | 461 comments



This certificate industry has been such a racket. It's never made explicit that certificates and encryption solve two completely separate problems. They get conflated, and non-technical users understandably get confused about which one is trying to solve a problem they aren't sure why they have.

The certificate authorities are quite in love with the fact that the self-signed certificate errors keep turning redder, bolder, and bigger. A self-signed certificate warning means "Warning! The admin on the site you're connecting to wants this conversation to be private but it hasn't been proven that he has 200 bucks for us to say he's cool".

But so what if he's cool? Yeah, I like my banking website to be "cool", but for 200 bucks I can be just as "cool". A few years back the browsers started putting extra bling on the URL bar if the coolness factor was high enough - if a bank pays 10,000 bucks for a really cool verification, they get a giant green pulsating URL badge. And they should; that means someone had to fax over vials of blood with the governor's seal proving it's a legitimate institution in that state or province. But my little 200 dollar, not pulsating but still green certificate means "yeah, digitalsushi definitely had 200 bucks and a fax machine, or at least was hostmaster@digitalsushi.com for damned sure".

And that is good enough for users. No errors? It's legit.

What's the difference between the URL bar turning green because I coughed up 200 bucks, and bright red with klaxons because I didn't cough up the 200 bucks to prove I'm the owner of a personal domain? Like I said, a racket. The certificate authorities love causing a panic. But don't tell me users are any safer just 'cause I had 200 bucks. They're not.

The cert is just for warm and fuzzies. The encryption is to keep snoops out. If I made a browser, I would give the 200 dollar "hostmaster" verification some orange, cautious URL bar - "this person has a site that we have verified to the laziest extent possible without getting sued for not even doing anything at all". But then I probably wouldn't be getting any tips in my jar from the CAs at the end of the day.


> A self-signed certificate warning means "Warning! The admin on the site you're connecting to wants this conversation to be private but it hasn't been proven that he has 200 bucks for us to say he's cool"

No. It means "even though this connection is encrypted, there is no way to tell you whether you are currently talking to that site or to the NSA, which is forwarding all of your traffic to the site you're on".

Treating this as a grave error IMHO is right because by accepting the connection over SSL, you state that the conversation between the user agent and the server is meant to be private.

Unfortunately, there is no way to guarantee that to be true if the identity of the server certificate can't somehow be tied to the identity of the server.

So when you accept the connection unencrypted, you tell the user agent "hey - everything is ok here - I don't care about this conversation being private", so no error message is shown.

But the moment you accept the connection over SSL, the user agent assumes the connection is intended to be private, and failure to assert identity becomes a terminal issue.

This doesn't mean that the CA way of doing things is the right way - far from it. It's just the best that we currently have.

The solution is absolutely not to have browsers accept self-signed certificates though. The solution is something nobody has quite come up with yet.


The solution is something nobody has quite come up with yet.

SSH has. It tells me:

WARNING, You are connecting to this site (fi:ng:er:pr:in:t) for the first time. Do your homework now. IF you deem it trustworthy right now then I will never bother you again UNLESS someone tries to impersonate it in the future.

That model isn't perfect either but it is much preferable over the model that we currently have, which is: Blindly trust everyone who manages to exert control over any one of the 200+ "Certificate Authorities" that someone chose to bake into my browser.
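For anyone who hasn't seen it, that first-connection prompt looks roughly like this (host, IP, and fingerprint made up):

    The authenticity of host 'example.org (203.0.113.7)' can't be established.
    RSA key fingerprint is fi:ng:er:pr:in:t:00:11:22:33:44:55:66:77:88:99.
    Are you sure you want to continue connecting (yes/no)?

Answer yes and the key is stored in ~/.ssh/known_hosts; from then on SSH stays quiet unless the key changes.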


...and then if the fingerprint changes, you get something like this:

    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@ WARNING! THIS ADDRESS MAY BE DOING SOMETHING NASTY!! @@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@


... and then you do rm .ssh/known_hosts and try again :P


    # remove the stale key for that host from known_hosts
    ssh-keygen -f ~/.ssh/known_hosts -R 123.45.67.89


If you get past the terrifying warning, it even gives you the command to copy and paste. You don't even have to type it!


You have probably just saved me... minutes of time over the course of a year!

But seriously, thanks. I was going into the known_hosts file and manually deleting the offending line :)


> SSH has.

IMHO no. We don't SSH to the same 46 servers every day. But we do log into that many (or more) websites. Can you imagine the amount of homework users would need to do in order for this to work?

Not to mention the amount of non-tech savvy users who just won't put up with it.


Quite the contrary: SSH's system means that you only have to "do your homework" when first connecting to the server. It seems I have 64 lines in my ~/.ssh/known_hosts (there are probably quite a few duplicates, because this seems high to me) and almost never have SSH tell me the key has changed and someone could be doing something nasty. When it does, I almost always know why, and when I don't then I try to contact the admin before connecting.
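(Side note: if you want to audit what you've already trusted, ssh-keygen can look up individual hosts, e.g. (host name made up):

    # print the stored key entry for one host, if any
    ssh-keygen -F server.example.org -f ~/.ssh/known_hosts

which also helps when weeding out duplicates.)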

The way certificate authorities work though, you might visit your bank's "secure" website everyday, with its green padlock and company name displayed, but if one day a rogue authority or a compromised one issues a certificate to someone else, and your DNS resolves to a new server, your browser would not even tell you anything has changed and would happily display the green padlock like it always has.

In the current state of things, you have to do the homework yourself for every site you visit when using HTTPS, while you don't with SSH.


Or you can install Certificate Patrol (https://addons.mozilla.org/fr/firefox/addon/certificate-patr...). And then you'll cry at the number of badly configured sites.


My browser also offers to accept any self-signed certificate: I can investigate it, accept it, and it won't ever bother me again until the certificate changes.

The problem is that this is a huge hassle for incidental visitors, whereas SSH does not have incidental visitors. Same goes for email: if it's your own server, you know the cert is the real one, you can accept it, and you're not bothered again.


Certificate Patrol can give you something like this for Firefox.


+1 for Certificate Patrol; I used to use it until it got too annoying for me. Same with RequestPolicy: another great extension that is unfortunately a lot of work if you surf a lot, especially these days, when everything hosts assets on CDNs.


I used to use EFF's SSL Observatory until I realized it spits out lots of extra HTTP requests. X.509 is inherently flawed/complex, and adding a browser plugin to make it better feels wrong.


So this is where we stand:

    Encrypted (Certified)    COOL GREEN
    Encrypted (Self-Signed)  EVIL RED
    Unencrypted              NOTHING / NEUTRAL CHROME
I think there's a pretty blatant antipattern here, and I'm not talking about colourblind-proofing the browser chrome.


> Encrypted (Certified) COOL GREEN

I think we can agree that this case is correct. If you have a properly vetted cert, more power to you. The browser should tell your users that you do own this domain.

> Encrypted (Self-Signed) EVIL RED

Not quite. Your user does have the ability to permanently trust this certificate. However, if I am trying to access gmail.com over HTTPS, I better not get this error. Otherwise, I know for a fact someone is messing with me.

> Unencrypted NOTHING / NEUTRAL CHROME

This case should be eliminated. We need to stop publishing stuff over HTTP. Period. The browsers should start fast-tracking the removal of HTTP support altogether so we don't even have to think about this case.

Now the solution for case #2 is that every time you buy a domain, your registrar should issue you a wildcard cert for that domain. Moreover, you should be able to use that private key + cert to sign additional certs for individual subdomains. That way we can eliminate all the CAs. We would essentially reuse the infrastructure that already supports domain name registration and DNS instead of funding a completely parallel, yet deeply flawed, CA industry. As a bonus, this way only you and your registrar may issue certs for your domain.
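A rough sketch of what that could look like with openssl, assuming browsers would honor a registrar-issued wildcard cert as a CA for your own domain (which they don't today; all names and paths here are made up):

    # pretend the registrar handed you this CA-capable cert for *.example.com
    openssl req -new -x509 -newkey rsa:2048 -nodes \
        -keyout example.key -out example-ca.crt \
        -subj "/CN=*.example.com" -days 3650

    # you could then mint per-subdomain certs yourself, no third party involved
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout mail.key -out mail.csr -subj "/CN=mail.example.com"
    openssl x509 -req -in mail.csr -CA example-ca.crt -CAkey example.key \
        -CAcreateserial -out mail.crt -days 365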

This is all castles in the sky, but IMO that's the correct solution.


>> Encrypted (Certified) COOL GREEN

> I think we can agree that this case is correct. If you have a properly vetted cert, more power to you. The browser should tell your users that you do own this domain.

Maybe. I just checked my browser and it already trusts more than 100 certificate authorities from all around the world: some companies I don't trust, some governments I don't trust, but mainly organisations I've never heard of. Even in a good system there would occasionally be leaks etc., but this mess of promiscuous trust is clearly insane.


The problem here is that getting a vetted cert - or worse, compromising the authority that vets those certs - is relatively trivial for a nation state, or even for someone morally compromised enough to, say, kidnap the CA director's family. The fact is, trust is easily compromised, and the current infrastructure needs to be hardened against that.

Even if the browser only had a single authority you do trust... how easy would it be for someone to force them to do something to compromise your trust? For instance with an NSL bound with a gag order?


> Even if the browser only had a single authority you do trust... how easy would it be for someone to force them to do something to compromise your trust? For instance with an NSL bound with a gag order?

By having several authorities you do trust? Preferably in different jurisdictions and parts of the world. But only ones you actually trust.


There's no such thing in X509 as a cert which is authorized only to sign certs within a certain subdomain. A CA is either trusted or not; if it's trusted, it can sign off on a cert for www.google.com.

A system where there's a .com root cert that can sign authority certs for .com subdomains, which themselves can only sign for their own subdomains - that's a great idea. Not part of the standard, though.


There is such a thing -- name constraints. It allows exactly what you describe, limiting the valid names for certificates signed by that CA cert.
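In openssl terms it's an X.509v3 extension on the sub-CA's cert; the extensions file would contain something along these lines (domain made up):

    # this sub-CA may only sign names under example.com
    basicConstraints = critical, CA:TRUE
    nameConstraints  = critical, permitted;DNS:.example.com

The catch is that client-side enforcement of name constraints has historically been inconsistent.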



Interesting - that's news to me, and does allow a domain-registry-based hierarchy. I guess there's the old revocation-check problem, though - when someone transfers a domain or it expires, you'd need to be able to revoke the authority cert. Potentially leads to a lot of revocation checks to validate a cert chain correctly...


You mention that the revocation-check problem is old, which is certainly true, but I think you allude to the possibility that a domain-registry-based hierarchy will exacerbate that problem in the form of an increase in revocation checks. I'm not sure that would be the case; it should be about the same. What difference does it make if I owned a domain, got a cert from a CA, and stopped owning the domain -- vs -- got that cert from my registrar? If anything this helps the process, because my registrar knows when I stop owning the domain whereas a CA has no clue and relies on the cert's expiration date exclusively.


I guess you're right - I was considering the fact that someone once owned a domain to be a threat, but it already is one.

But with a delegated chain of certs, the problem does get worse - not least because you'd require individual domains to manage their own certificate revocation.

But since there's basically no secure way to obtain CRLs or perform OCSP cert validation, it's kind of moot.


I think this is kind of backwards? I.e. a CA that implements name constraints for one of its sub-CAs does limit the certs that sub-CA may sign. However, name constraints do not allow one to say "for this domain, only this sub-CA may sign certs", which is more what I feel we're looking for here?


> There's no such thing in X509 as a cert which is authorized only to sign certs within a certain subdomain. A CA is either trusted or not; if it's trusted, it can sign off on a cert for www.google.com.

As currently implemented this is mostly correct. I don't think the CAs want that situation to change, but it really harms the usability of the entire system.


This is the problem that pinning is intended to solve.


>We need to stop publishing stuff over HTTP. Period.

This is a short-sighted solution. If you go this route, then you are constraining authentication to the client. Users always choose bad passwords, so we are stuck.

In mobile networks, you have the network in a position to strongly authenticate the subscriber, without necessitating the weaknesses that can come with bad passwords.

I generally agree that TLS is desirable, but if we go all in, there are interesting and potentially more desirable alternatives that are lost.


FWIW, this is the route we are already going with HTTP/2: as implemented, SPDY pretty much requires encryption.

Also, while mobile networks can authenticate my mobile phone and the hops from my phone to their edge router can be "trusted" (don't forget that the NSA is snooping here), I want end to end encryption. I want to know that the only two entities able to send/receive data are the site I'm trying to talk to and myself.

Let's think about it this way: in 2014 I propose a new protocol and implementation where you run a program on your device and I push arbitrary code to it. I also include code from advertisers, partners, third party affiliates, and my buddy Dave. All of this is done over clear text with no authentication, no authorization, no proof of identity or ownership, and over unsecured networks. Here's the link to the installer :) Yeah, I wouldn't sign up for that either.


I understand your argument. Barring some of the hyperbole of your worst case scenario, I totally get it.

In my opinion the rationality of your perspective is one of the most damaging consequences of the NSA's behavior.

Attacking the client is easy for both hackers and nation states. Moving the control to infrastructure tends to cut out whole swaths of script kiddies. There are important scenarios where this makes a ton of sense (M2M, IoT, many mobile apps), and those assholes have just burned everyone's trust to the point that nascent solutions are no longer viable.


I am not quite sure what you are saying. Is it that it is in fact better to allow HTTP to exist vs. providing HTTPS backed by some type of trusted infrastructure? Or are you saying that we can build a brand-new solution from scratch and fix the existing system somehow?


It's better to allow http to exist.

There is an opportunity for new authentication approaches that can't exist in a TLS-everywhere world.

I'm looking at http://en.wikipedia.org/wiki/Generic_Bootstrapping_Architect... in particular.


> This case should be eliminated. We need to stop publishing stuff over HTTP. Period.

HTTP is perfectly fine for information originating on and never leaving controlled, trusted, internal networks, and there is no reason to pay the overhead for HTTPS for those cases.

There are other use cases where it's probably not worth the (small) overhead of HTTPS.


No it is not. I have talked about this on here before, but I don't mind repeating myself:

- Your small blog you publish over HTTP is now opening the door for me, the attacker, to mess with any traffic originating from your site. Say you host your resume on your site: I can substitute a much less flattering version. Say you host a code snippet: I can add a little obfuscated fork bomb or rootkit at the end. Say you have a referral link to Amazon: I send your users to amazom.com, a site that MitMs amazon.com but captures credit card details on the payment form.

- Your internal corporate system is great and all, until you have an unrelated breach and the HTTP site becomes a vector for me to attack your systems. Or worse yet, I learn how to trick your users into believing they are accessing your genuine document store when in fact they are uploading their secret company plans to my very own rogue site. Trust inside the electrified fence is different than on the Internet, but a self-signed cert that your IT department sends to every employee is also pretty easy. Conversely, if your organization is so large that that's impractical, just buy a $10 domain and an $8 TLS cert. The "overhead" you speak of does not exist once your server side stops supporting HTTP. FFS, configuring nginx to use TLS/HTTPS takes exactly three additional lines of configuration compared to an HTTP-only site.
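For reference, those three lines are roughly (cert paths made up):

    # inside the server block: enable TLS and point at the cert/key
    listen 443 ssl;
    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;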


> I can substitute/add/send...

Only if you control any of the infrastructure. If you do, then you can make my life a misery anyway, encrypted or not.


Authenticated and encrypted? That throws a wrench into things.


The authentication provided by the extant PKI system is much weaker than the encryption provided. Any CA can do anything it wants, and browsers trust lots of them.


What's worse is that there has already been at least one case of an ISP rewriting links to Amazon for all of their customers. https://news.ycombinator.com/item?id=6992897


There's also significant overhead to the community at large in having both HTTP and HTTPS be reasonable systems to use, and requiring that HTTP not show loud warnings. There's also a risk to your organization that you're teaching users that some HTTP sites are reasonable, which is a hard judgment for them to make. I can put up an external website which claims to be internal, and probably get some passwords or confidential information that way.

If you use HTTPS everywhere, there is a tiny bit (but usually negligible) runtime overhead, a bit of process overhead (which this announcement is pushing much closer to zero), and significant simplicity in many other axes. I think the tradeoff leans towards publishing internal sites with globally-valid HTTPS certificates.


The registrar-issued-cert solution would certainly speed up HTTPS adoption; you're dealing with one less org to secure your site. The downside is that moving registrars still complicates things. What if the new registrar refuses to issue a new cert without a hefty fee? Or what about revoking the previous cert? The registrar would be functioning as a de facto CA, so this doesn't completely eliminate the middle-man factor.

I'm hoping the EFF project will smooth over these hiccups, which is why I'm looking forward to it.


> The down-side is that if you decide to move registrars, that still complicates things. What if the new registrar refuses to issue a new cert without a hefty fee?

Then everyone stops using that registrar and they go out of business.

> Or what about revoking the previous cert?

You're asking this as if there is some kind of functioning method of revoking certificates already. If anything this makes it easier because it could be plausible for clients to somehow retrieve who the registrar is for the domain and then only accept certificates signed by that registrar.


If the popularity of GoDaddy has taught me anything, it's that people use what they know, not what's good. The list of companies that should have gone out of business is as long as the number of years since commerce began.

The fact that they stick around means (and this is relevant to the EFF project as well) that creating alternatives is only half the job; making enough people know and care about them is just as hard.

The registrar check per domain is probably the biggest plus of having it act as a CA. Of course, that adds overhead for the registrar, which they may not be willing to accept (margins and all that).


I'm not sure you should completely cut off anyone else but your registrar from holding the power to grant you certs.

As long as you can transfer the domain out I guess it's not too bad.


What does it matter who issues your cert if your registrar controls your domain name? They can transfer your domain name to the FBI, your competitor, your ex-husband, whoever. They can keep it for themselves, and they can publish their own DNS servers as authoritative, making all traffic flow through them anyway. They are already in 100% control of your domain and you are at their mercy. You already trust them enough to buy the domain from them. Why would you want to give a third party that same level of access when you don't have to? The CAs would have you believe that they have tighter security than anyone else, so you should trust them. That's silly. Your registrar has more control over your domain than your CA, so either their security has to be just as good, or you are screwed anyway.


This article[0] is largely about DNSSEC and DANE, but it might give you some insight into why making registrars the sole authorities isn't such a good idea.

[0] http://www.thoughtcrime.org/blog/ssl-and-the-future-of-authe...


I may be dense but it seems to me that your registrar is still the trusted entity no matter what:

- they sell you the domain name. Doesn't matter how you try to authenticate yourself to clients (cert pinning aside), the registrar can seize the domain at any point.

- they control what your authoritative name servers are. They could easily change these on you.

- they populate the whois database, which is consulted when you purchase your TLS certs. This means that a registrar can list joe@fbi.gov as the contact and have Joe get a completely valid cert.

- one important issue that the article does not mention is that you are forever locked into trusting the site operator. This means that you as a user already must trust another entity.

Thus, what I am proposing is that out of the current trust list [site owner, registrar, CA] we cut out the CA. Once again, the registrar always trumps the CA in its ability to seize your domain. At the same time, the CA provides zero protection against the registrar misbehaving. The article talks about shifting trust from the CA to the registrar and how that's bad. I posit that you already trust the registrar, forever (or for as long as you are willing to use their TLD), so you would be strictly reducing the number of entities you need to trust, never adding new ones.


To know something is insecure can be acceptable. To think something is secure when it isn't can be far more dangerous. (I'm taking "secure" to mean encrypted, with identity reasonably verified.) Whatever your thoughts on the CA process, it serves a purpose.

There are plenty of other things to complain about. EV for one.


It's actually more like this:

    Encrypted (Certified)    EVERYTHING'S FINE
    Encrypted (Self-Signed)  OMG!!!
    Unencrypted              EVERYTHING'S FINE (while it's not)


Authentication and encryption are fundamentally separate ideas, and the problem here is that the CA system mixes them together, when an optimal solution (read: encryption everywhere) would be to tackle them separately.

    Encrypted (Certified)    AUTHENTICATED & ENCRYPTED
    Encrypted (Self-Signed)  NOT AUTHENTICATED & ENCRYPTED
    Unencrypted              NOT AUTHENTICATED & NOT ENCRYPTED
Doing financial work or communicating with friends/coworkers? Make sure your connection is authenticated and encrypted.

Connecting to a blog? Encryption is a plus (and is the topic of this very HN post). But unencrypted is also okay.

The original CA system was not designed to defend against mass surveillance so it had little incentive to separate these concerns.


It's definitely an antipattern. It's hard to solve until we get HTTPS deployable everywhere, because the first browser to defect from this antipattern will lose all its users, so it's extremely important to push on HTTPS being deployable and deployed everywhere.


> It's just the best that we currently have.

No, I wouldn't say so. Having SSL is better than having nothing on pretty much any site. But if you don't want to pay somebody $200 for nothing, you would probably consider using HTTP by default on your site, because thanks to how browsers behave it just looks "safer" to the user who knows nothing about cryptography. Which is nonsense. It's worse than nothing.

And CAs are not "authorities" at all. They could lie to you, they could be compromised. Of course, the fact that a certificate has been confirmed by "somebody" makes it a little more reliable than if it had never been confirmed by anyone at all, but these "somebodies", the CAs, don't have any control over the situation; they're just some guys who came up with the idea of making money like that early enough. You are as good a CA as Symantec is: you could just start selling certificates, and it would be the same - except, well, you are just some guy, so browsers wouldn't accept your certificates, so they'd be worth nothing. It's all just about trust, and I'm not sure I trust Symantec more than I trust you. (And I don't mean I actually trust you, by the way.)

For everyone else it's not really about SSL, security and CAs, it's just about how popular browsers behave.

So, no, monopolies that exist only because they're the ones allowed to do something are never good. Only if they do it for free.


> And CAs are not "authorities" at all. They could lie to you, they could be compromised.

Actually just read their terms of service, which may as well be summarised as "we issue certificates for entertainment purposes only".


There's no question in my mind that the whole thing is a racket and militates against security (you generally don't even know all the evil organisations that your browser implicitly trusts - and all the organisations that they trust etc).

There are certainly other options too: here's my suggestion-

The first time you go to a site whose certificate you haven't seen before, the browser should show a nice friendly page that doesn't make a fuss about how dangerous it is, and shows a fingerprint image for the site that you can verify elsewhere (for example from a mail you've been sent), with a list of images from fingerprint servers it knows about that have a record for that site shown next to it.

Once you accept, it should store that certificate and allow you access to that site without making a big fuss or making it look like it's less secure than an unencrypted site. This should be a relatively normal flow and we should make the user experience accessible to normal people.

It's basically what we do for ssh connections to new hosts.


The SSH approach is exactly what I was thinking of, where you know the fingerprint of the other side you're connecting to.

I believe verification should be done out-of-band, using some other way (e.g. advertising) to transmit the fingerprint to the users. I've used self-signed certificates to collaborate over HTTPS with people I know in real life, and all I do is give them little pieces of paper with my cert printed on them.
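Generating the cert and the fingerprint that goes on the paper is a one-minute job with openssl (names made up):

    # self-signed cert, good for a year
    openssl req -x509 -newkey rsa:2048 -nodes \
        -keyout my.key -out my.crt -subj "/CN=example.net" -days 365
    # this is the fingerprint you print on the pieces of paper
    openssl x509 -in my.crt -noout -fingerprint -sha256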


With SSH you usually own both endpoints (or at least trust your cloud provider).

The example you give with regards to exchanging a piece of paper is very similar. It's ridiculously hard to do such a thing at a large scale without trusting intermediaries.

I'm putting my eggs in the certificate pinning basket.


You're (almost) describing certificate pinning. Have a look at http://tack.io although it appears down for the moment. Here is the HN thread: https://news.ycombinator.com/item?id=4010711


How would you rotate keys with that scheme?


You'd need a strong root key and subkeys that rotate underneath. To change the root key would require signing by the original root and a new message to appear for confirmation.

All this, plus something like a notary system to double-check all your trusted root keys, would be much better than the hierarchical CA system we have.


Which root keys? The ones you store on your web server, which just got compromised?


Why would one store them there? Why not just use them to sign other keys that are actually used in online systems?


No, the question is what to do when you need to rotate them. Because that need will arise somewhere, globally, if we were to run the secure web on trust-on-first-use.

It's not interesting why someone hypothetically did get their root keys compromised, it's interesting how the proposed system would cope with it.

(Downvoting the question is not really a web scale way to build a global trust system.)


"Treating this as a grave error IMHO is right because by accepting the connection over SSL, you state that the conversation between the user agent and the server is meant to be private."

This is misguided thinking, pure and simple. Because of this line of thinking, your everyday webmaster has been convinced that routinely encrypting data is more trouble than it's worth, which has allowed the NSA (or the Chinese or Iranian authorities, or what have you) to simply put in a tap and slurp the entire internet without even going through the trouble of targeting and impersonating. Basically, this is the thinking that has enabled dragnet surveillance of the internet with such ease.


But as a user I can understand that an HTTP site is insecure, while a self-signed certificate might lull me into a false sense of security.


That's the proffered reasoning, as we all know. But the actual outcome (to quote rufb from this comment: https://news.ycombinator.com/item?id=8625739) is:

    Encrypted (Certified)    COOL GREEN
    Encrypted (Self-Signed)  EVIL RED
    Unencrypted              NOTHING / NEUTRAL CHROME

Tell me how the logic works here (for an average user).


Not considering the many holes, ciphersuites, running TLS 1.3+, etc.

( http://wingolog.org/archives/2014/10/17/ffs-ssl )

...it should probably look like this:

Safe against active attacks:

    Encrypted (Certified)    COOL GREEN
Safe against passive attacks:

    Encrypted (Self-Signed)  SCARY ORANGE
Safe against world peace, ie. UNSAFE:

    Unencrypted              EVIL RED


> Tell me how the logic works here (for an average user).

"Neutral Chrome" is the default state of the web -- the site doesn't assert that it should be trusted, and it shouldn't be, and that's the default state people should have in approaching the web.

"Cool Green" is "the site asserts that it has a particular identity and that communication with that identified site is private, and it passes the tests built into the browser's security model to verify all that."

"Evil Red" is "the site asserts that it has a particular identity and that communication with that identified site is private, but it fails the tests built into the browser's security model to verify all that."

Seems to me to be perfectly logical, even if we might prefer a better security model for making and verifying the claims at issue.


Plaintext is zero security.

Self-signed is a low probability of security.

Signed is a high probability of security.

This continuum makes more sense than the current state of affairs.


If someone forwards plaintext, it's called a proxy.

If someone forwards encrypted content on behalf of my server, it's called man-in-the-middle attack, and they should not be capable of doing it without the huge red flags.


Self-signed is a significant probability of man-in-the-middle attack.


I can self-sign a certificate for gmail.com; the browser correctly warns about potentially BIG security issues with it.


> No. It means "even though this connection is encrypted, there is no way to tell you whether you are currently talking to that site or to the NSA, which is forwarding all of your traffic to the site you're on".

That would be correct if you could assume that the NSA couldn't fake certificates for websites. But it can, so it's wrong and misleading. It's certificate pinning, notary systems, etc. that actually give some credibility to the certificate you're currently using, not whatever the browsers indicate by default.

FWIW, (valid) rogue certificates have been found in the wild several times, CAs have been compromised etc. ...


I agree. A more common MITM, one that it actually would prevent, comes from a rogue wifi operator.


> FWIW, (valid) rogue certificates have been found in the wild several times, CAs have been compromised etc. ...

And it's only going to get worse as SHA-1 becomes more and more affordable to crack.


The CAs have agreed to stop using SHA-1 by 2016, and Let's Encrypt will launch with something stronger on day one.

But SHA-1 attacks are going to be a huge problem all over our protocol stack :(


The NSA has no CA. The only attack they really have is brute force or server compromise - both of which undermine pinning.


They can get US corporations (including many CAs) to cooperate. For example, to obtain a fake (but perfectly working) google.com certificate, they can ask Google (more or less) nicely to provide one, or they can go ask any CA instead. It's not likely that compromise is required with so many potential sources, some of which may be paid or coerced to cooperate.

PS. nice (presumably political) downvote further up ...


The NSA can do this, yes. But any CA that issues a fake cert for Google will be found out rather quickly, and will then get blacklisted and lose business.

So while the NSA can technically do that, they only get a few shots, because each one has a high chance of burning the CA.

For lesser sites and narrow targets, this may not be true.


This is precisely the problem with centralized security authorities. As we've seen, a state actor can easily force a central authority to share its private key, thereby granting the state actor the ability to untraceably create its own certificate chains.

It would also have to control the wire for the attack target, but thanks to wiretapping laws that is already a solved problem. Because they control the connection of the attack target, I don't see how the fact that the certificate chain was compromised would ever become public knowledge.

Web of trust was designed to address the central authority weakness, but itself apparently has scalability issues, although I'm unclear on why.


Google is indeed in a uniquely good position to detect and possibly prevent a fake certificate, but we don't know whether that's what they want or whether they can be coerced into cooperating. Millions of other websites are not protected in the same way.


One would hope certificate transparency would help fix this problem.

(for the record, I didn't downvote you)


A fake certificate for Google wouldn't work, in Chrome at least. There is certificate pinning already.


That is completely ineffective if they get Google to cooperate and issue an update that pins the new cert - and due to how automatic updates work, the majority of users will be completely oblivious, and those who do notice the new certificate won't find it any more suspicious than any other certificate update.


The NSA has NSLs (national security letters) with gag orders. There are CAs in the US. Mission accomplished.


Wouldn't help with Google though - anybody who tried to fake a Google cert would be caught by Chrome within a few seconds. There is a lot of value associated with owning a browser; enhanced security is just one example.


You speak as if the power of NSLs has a functional limit - it doesn't, which is what makes the entire concept so dangerous.

There's nothing stopping the requirements from being "mint us a certificate according to these specs" and additionally "okay, now pin this certificate in your browser".


You might want to read up on what an NSL actually is, since you and the GP are clearly very confused.


Explain, please.

What prevents an NSL from compelling Google to mint a new certificate (they are a CA), provide the keys to the bad guys, and distribute that certificate in Chrome? NSLs have been used in the past to compel positive action (c.f. Lavabit), so I really don't see how you think there's any practical limit to their power.

My understanding is that there isn't a limit. If I am wrong about this, then kindly reply directly here so we can all learn instead of giving the "read up on" non-answer.


An NSL can be used only to compel release of connection or transaction metadata, and cannot be used to compel disclosure of message contents. It's basically a fast-track for getting things like call records, and it most emphatically cannot be used to compel turning over a certificate or allowing a man-in-the-middle.

To my knowledge the exact details of the Lavabit case were never released, but from what has been released it's quite clear that the issue was regarding a warrant and a gag order, because the ensuing litigation wouldn't have been remotely applicable to an NSL (otherwise Lavabit's attorney would have won in a walk).

None of this is to say that I think NSLs should exist. In fact, I think they're a terrible idea. But the vast majority of discussions around them and similar topics is so grossly uninformed that it's impossible to take most people seriously on these subjects.


Okay, so not an NSL. Incorrect terminology pointing at the same awful effect, an unaccountable court issuing unchallengeable rulings that cannot be discussed.

No substantial difference from the concept I'm complaining about.


"Ignorance more frequently begets confidence than does knowledge."


It's a letter, issued by an occult kangaroo court, that coup d'etat forces hold in hand while demanding the keys to the kingdom - a demand that can't be challenged in a legitimate court of law.


I'm now curious. Explain to me how an NSL fits into the scenario you're implying.


That would be stupid. Google is a US company. NSA has NSLs. Mission accomplished. No certs involved.


How did you get Google into all this? If you're implying that Google owns a search site/Gmail/a browser, know that there are alternatives, which the NSA's target could be using. A fake certificate from a trusted US CA can MITM a connection to almost any website from almost any browser.


That should have been a reply to the sibling comment, where it was implied this would be a strategy against Google.


Browsers shouldn't silently accept self-signed certs, but there is a class of servers where self-signed is the best we've got: embedded devices. If I want to talk to the new printer or fridge I got over the web, they have no way of establishing trust besides TACKing my first request to them.


I bought a camera the other day with the nifty feature of having an NFC tag embedded in it to guide your phone to launching (and installing, if necessary) the companion mobile app.

It occurred to me that this is a really good way of establishing a trust path: while they're only using it to guide you to the right app, they could embed a little public key in there. Then you could authenticate the new printer or fridge by physically being near it.

We'd have to extend our UIs a bit to cover these use cases (it should basically act like a trusted self-signed cert), and probably you only want to trust NFC certs for *.local.


Technically, there's no reason why a fridge couldn't have a signed cert tied to some dynamic DNS (e.g. <fridge-serial-number>.<manufacturer>.<tld>).


True, but on many small networks, you aren't addressing the embedded device by a FQDN.

All these appliances should let you change the cert on them, but you still need that initial connection, and at smaller organizations (or households) the certs will never ever be changed.

I used to work on embedded security projects so I care about this; I also realize that's a small portion of the market. I'm okay with making the people connecting to their new printer jump through a hoop in order to reduce the chances of someone hijacking www.paypal.com, but you still have to allow some way in.


True, but on many small networks, you aren't addressing the embedded device by a FQDN.

Why not?


Why should my fridge have a FQD name? What purpose does that serve?

Why install a firewall in each device if you can install one on the router that works for everything?


Why should my fridge have a FQD name? What purpose does that serve?

To allow you to create a signed certificate to authenticate it?

Why install a firewall in each device if you can install one on the router that works for everything?

Having an FQDN doesn't mean you need to install a firewall on your device. You can still use the router's, and even prevent any inbound connections from the WAN to the device.


NAT traversal?


FQDN doesn't have to mean publicly accessible. I have a personal subdomain that points to an internal IP. It's kinda weird to do with IPv4, but it works fine, and with IPv6 it'll be natural, since each device will probably have a globally unique address anyway, even if it can't be accessed outside of your LAN.


But note that only works if the manufacturer can choose the name without the customer objecting. For things like network appliances in larger companies that aren't going to want [generic number]manufacturer.com but want [my name].corp.[my company].com, you're stuck.


Allow the cert to be configurable, then the company can use its internal CA to give certs to all its appliances.


Yes, that's the status quo, and has been for a while. The point is that's currently the best you can do. For boxes without external exposure, this work won't change anything, but a standardized protocol for dealing with boxes with external exposure would still help some use cases.


Oh god, they have internet fridges now? What on earth for?



> So when you accept the connection unencrypted, you tell the user agent "hey - everything is ok here - I don't care about this conversation being private", so no error message is shown.

Maybe a security-conscious person thinks that, but the typical user does not knowingly choose http over https, and thus the danger of MitM and (unaccepted) snooping is at least as large for the former.

So it's somewhat debatable why we'd warn users that "hey, someone might be reading this and impersonating the site" for self-signed https but not http.


The use case for the CA system is to prevent conventional criminal activity -- not state-level spying or lawful intercept. The $200 is just a paper trail that links the website to either a purchase transaction or some sort of communication detail.

The self-signed cert risk has nothing to do with the NSA... if it's your cert or a known cert, you add it to the trust store, otherwise, you don't.


Private to the NSA and reasonably private to the person sitting next to you are different use cases. The current model is "I'm sorry, we can't make this secure against the NSA and professional burglars so we're going to make it difficult to be reasonably private to others on the network".

It's as if a building manager, scared that small amounts of sound can leak through a door, decided that the only solution is to nail all the office doors open and require you to sign a form in triplicate that you are aware the door is not completely soundproof before you are allowed to close it to make a phone call. (Or jump through a long registration process to have someone come and install a heavy steel soundproofed door which will require replacement every 12 months.)

After all, if you're closing the door, it's clearly meant to be private. And if we can't guarantee complete security against sound leaks to people holding their ear to a glass on the other side, surely you mustn't be allowed to have a door.


The person next to you in the cafe can MITM a self-signed TLS connection just as easily as the NSA; and the NSA can probably MITM a CA-signed TLS session, since the U.S. government owns or has access to quite a few root certificates. So "no self-signed certs" is really a measure to protect you from the lowest level of threat. Almost any attacker that can MITM HTTP can MITM HTTPS with self-signed certs that you never verify in any way. Encryption without authentication is useless in communications.


Self-signed certificates are still better than plain-text HTTP. I understand not showing the padlock icon for self-signed certificates; I don't understand why you would warn people away from them when the worst case is that they are just as unsafe as plain HTTP. IMHO this browser behavior is completely nonsensical.


How would a browser know whether the self-signed certificate that was just presented for www.mybank.com is intended to be self-signed (show no error, but also show no padlock) or is the result of a MITM attack because www.mybank.com is supposed to present a properly signed certificate (show an error)?

How would you inform people going to www.mybank.com which is presenting a self-signed cert in a way that a) they clearly notice but that b) doesn't annoy you when you connect to www.myblog.com which also is presenting a self-signed cert?


If the user typed www.mybank.com, let the server redirect to https but don't show the lock icon if it's self-signed. This is no worse than an impostor that just doesn't redirect to https.

If the user typed https://www.mybank.com, show the usual warning for self-signed certificates.


How many people are careful to type "https" every time they visit a website? How many people pay close attention to the lock icon/color of the URL bar? This advice seems to ignore the existence of sslstrip [0] and related attacks, and the numerous countermeasures that have been designed to deal with this problem (e.g. HSTS).

[0] http://www.thoughtcrime.org/software/sslstrip/
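(For the unfamiliar: HSTS is just a response header served over HTTPS that tells the browser to refuse plain HTTP for that host for a while, something like:

    Strict-Transport-Security: max-age=31536000; includeSubDomains

It only helps after the first clean HTTPS visit, which is exactly the window sslstrip exploits.)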


This is EXACTLY what I want for my intranet sites. It lets me protect my users from the wireshark in the next cubicle.


The solution for this is to run your own CA internally and push out the cert to all the machines. (If you have BYOD stuff it makes it a little harder, but you could still have an internal CA signing only a certain subdomain and get people to install it.)
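The push-out step is the easy part on managed machines; on Debian-style systems, for example, it's just (file name made up):

    # install the internal CA cert into the system trust store
    sudo cp corp-ca.crt /usr/local/share/ca-certificates/
    sudo update-ca-certificates

(Firefox keeps its own certificate store, so that one needs to be handled separately.)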


But that doesn't protect you from a malicious user hijacking the domain from the next cubicle. If your switches are not properly configured, the guy in the next cubicle could do some ARP spoofing, and https://intranet.yourdomain would be served by a bogus server collecting passwords.

But your users won't notice the difference, because they are used to seeing the certificate warning in their browser.


How would a browser know whether the fact that www.mybank.com doesn't use SSL at all is intended by the bank or the result of a MITM attack? At the end of the day it all relies on the user seeing the (lack of a) padlock in their browser. So as long as you don't show a padlock (or show a different kind of padlock) for www.mybank.com when the certificate is self-signed, you're good.


You would simply have to install the certificate of the CA that signed the certificate. Self-signed just means that YOU are the CA.


No. Self-signed certificates are much worse because they bring a false sense of security.

A self-signed certificate is trivially MITMed unless you have a way to authenticate the certificate. At the moment CAs are the best known way to do that (and before anyone brings up certificate pinning or WoT: they come with their own problems, please read this comment of mine https://news.ycombinator.com/item?id=8616766).

EDIT: You can downvote all you want but I'm still right.

Each time anyone repeats the "self-signed certificates are still better than HTTP plain text" lie, it hurts everyone in the long run.

They're much worse, both for the users and from a security perspective. Self-signed certificates are evil unless you know exactly what you're doing and are in full control of both ends of the communication (in which case just trust it yourself and ignore the warnings).


The extent to which this is true depends on browser behavior. With some browser behavior self-signed certs could make some users safer against some threats; with other browser behavior they could make some users more vulnerable to some threats.

An opportunistic privacy solution with no legacy installed base to worry about is tcpcrypt:

http://www.tcpcrypt.org/

So if anyone wants to make progress on opportunistic unauthenticated encryption without having to fight about UA behaviors, tcpcrypt may be more fertile ground than self-signed certificates.


> With some browser behavior self-signed certs could make some users safer against some threats

How exactly? Did you read my linked comment?

As far as I can tell, self-signed certs are always a no-no. As soon as one is compromised and has to be revoked the whole system breaks apart.

The only situation where a self-signed certificate makes sense is when you control both ends of the communication and can revoke the cert on the client yourself.

In the age of WiFi, you can't dismiss active attacks.

EDIT: Again, whoever is downvoting can downvote all he wants but I'm still right. If I am not, prove it via comments, not downvotes, and we'll be able to discuss each other's views.

Even parent's Tcpcrypt link says it is vulnerable.

> By default Tcpcrypt is vulnerable to active attacks

> Tcpcrypt, however, is powerful enough to stop active attacks, too, if the application using it performs authentication.

How are you going to perform authentication via insecure channels without CAs?


Well, the best example I know of is the proposals for opportunistic upgrades from HTTP to HTTPS, for example via a header in the HTTP reply. If the browser performs the opportunistic upgrade, negotiates an HTTPS connection behind the scenes, and doesn't tell the user that the connection was served over HTTPS, then accepting a self-signed cert invisibly in this context makes the user no worse off than not performing the upgrade (and better off against an adversary who isn't currently performing an active attack).

I forgot what the current status of drafts proposing this is. Amazingly, I found that Rohit Khare described a form of this mechanism way back in 1998 (so it's not a super-new concept).


Although such a scheme is indeed safer than HTTP (it protects against passive attacks), what you're describing is not self-signed certificates, but merely encryption (with new random _unauthenticated_ keys per session).

Keys would be exchanged via Diffie-Hellman as usual, but a certificate wouldn't be involved, since it's useless anyway (you can't certify anything in such a scheme, so why bother at all?), and the scheme would thus be vulnerable to active attacks.

Certificates imply long-term authentication. It's an important nuance since they are long-lived by definition, so they have to be trusted and revoked as needed, in which case we're still facing the problem I mentioned earlier.


I agree that the certificates don't serve any useful function in this scenario; they might be required pro forma, but they aren't actually doing anything helpful.


> A self-signed certificate is trivially MITMed unless you have a way to authenticate the certificate.

Trivial? Yes. As trivial as intercepting plain HTTP? No.

The NSA or adversary du jour can vacuum up anything sent over plain HTTP with zero risk. Self-signed HTTPS forces the attacker to commit some resources and, more importantly, run the risk of exposure. Security is not a binary (no encryption scheme is perfect), it's about increasing the cost to attackers.



HTTPS with self-signed certificates remains better than plain HTTP. The fact that you can propose an unimplemented, unstandardized, theoretical scheme that would offer the same advantages as HTTPS with self-signed certificates does not make HTTPS with self-signed certificates worse than plain HTTP.


Reddit discussion about this, with much of the same arguments there as here (and talking past each other just as much):

http://www.reddit.com/r/ProgrammerHumor/comments/2l7ufn/alwa...


The warning is designed to let people know that who you're talking to can't be proven, which is important when someone tries to impersonate a bank, or your email provider, or any other number of important sites.


CA-signed certificates don't prove you're talking to who you think you are either as any CA trusted by your browser/OS can sign any certificate.


Yes. That's not perfect. But it raises the bar for forgery to "can sign certificates as a root authority", which is still fairly high (e.g. I can't do it, and neither can you). It stops coffee shop/hotel wifi operators and mobile providers from injecting content into your session.

If we encourage users to blindly accept self-signed certificates (giving us end-to-end encryption but sacrificing identification), nothing would stop those actors from altering your HTTPS sessions as easily as they alter your HTTP sessions today. It's throwing the baby out with the bathwater.


You don't need a CA system to solve that problem, though. Take, for example, Convergence[0] which uses a notary system in place of the CA system.

[0] http://convergence.io


This is true for most sites now, but is being solved gradually, with hard-coded certificate pinning already shipping in Firefox and Chrome, and the HTTP Public Key Pinning extension coming soon.
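For the curious, HPKP as drafted is another response header, listing hashes of keys the site promises to keep using (pins made up; the spec requires a backup pin):

    Public-Key-Pins: pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=";
        pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g=";
        max-age=5184000; includeSubDomains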


But when you browse over HTTP, you don't know who you're talking to either, so how are self-signed certificates worse than HTTP?

I'm really having trouble figuring out an attack scenario unique to self-signed certificates that you don't have with plain HTTP.


Security-wise, if they are both vulnerable to trivial exploits, how can you say one is "more secure" than another?


Because encryption with SSL without trust of the SSL cert is meaningless. It might as well be not encrypted.


I wonder if this is true.

If there's a man in the middle, then they can read the traffic. But others still have a problem.

With HTTP, you know that everyone can read the traffic.

I think unsigned certs, especially with pinning, can be used to make wholesale collection of internet traffic vastly more difficult.


Now you are talking about obscurity, not security. In my opinion.


Here's one thing that's NOT the solution: throwing out all encryption entirely. Secure vs. insecure is a gradient. The information that you're now talking to the same entity as when you first viewed the site is valuable. For example, it means that you can be sure you're talking to the real site when you log in to it on public wifi, provided you have visited that site before. In fact, I trust a site that's still the same entity as when I first visited it a whole lot more than a site with a new certificate signed by some random CA. In practice the security added by CAs is negligible, so it makes no sense to enable/disable encryption based on that.

Certificates don't even solve the problem they attempt to solve, because in practice there are too many weaknesses in the chain. When you first downloaded Firefox/Chrome, who knows that the NSA didn't tamper with the CA list? (Not that they'd need to.)


Moxie Marlinspike's Perspectives addon for Firefox was a good attempt to resolve some of the problems with self-signed certs.

Unfortunately, no browsers adopted the project, and it is no longer compatible with Firefox. There are a couple forks which are still in development, but they are pretty underdeveloped.

I wonder if Mozilla would be more likely to accept this kind of project into Firefox today, compared to ~4 years ago when it was first released, now that privacy and security may be more important topics to the users of the browser.


The solution, at least for something decentralized, seems to be a web of trust established by multiple other identities signing your public key with some assumption of assurance that they have a reasonable belief that your actual identity is in fact represented by that public key.

That's what PGP/GPG people seem to do, anyway.

Why can't I get my personally-generated cert signed by X other people who vouch for its authenticity?


> No. It means "even though this connection is encrypted, there is no way to tell you whether you are currently talking to that site or to the NSA, which is forwarding all of your traffic to the site you're on".

Well... that's true regardless, as the NSA almost certainly has control over one or more certificate authorities.

But I agree with the sentiment. :)


It's interesting that your boogeyman is the NSA and not scammers. I think scammers are 1000X more likely. Especially since the NSA can just see the decrypted traffic from behind the firewall. There's no technology solution for voluntarily leaving the backdoor open.


> or to NSA which

Nah. The NSA, or any adversary remotely approaching them in resources, has the ability to generate certificates that are on your browser's trust chain. Self-signed and unknown-CA warnings suggest that a much lower level attacker may be interfering.


Just a small nitpick: I'm pretty sure the NSA has access to a CA to make it look legit.


> The solution is absolutely not to have browsers accept self-signed certificates though. The solution is something nobody has quite come up with yet.

We do have a solution that does accept self-signed certificates. The remaining pieces need to be finished and the players need to come together though:

https://github.com/okTurtles/dnschain

If you're in San Francisco, come to the SF Bitcoin Meetup, I'll be speaking on this topic tonight:

http://www.meetup.com/San-Francisco-Bitcoin-Social/events/18...

Let's Encrypt seems like the right "next step", but we still need to address the man-in-the-middle problem with HTTPS, and that is something the blockchain will solve.


I totally agree that CAs are a racket. There's zero competition in that market and the gate-keepers (Microsoft, Mozilla, Apple, and Google) keep it that way (mostly Microsoft however).

That being said: Identity verification is important as the encryption is worthless if you can be trivially man-in-the-middled. All encryption assures is that two end points can only read communications between one another, it makes no assurances that the two end points are who they claim to be.

So verification is a legitimate requirement and it does have a legitimate cost. The problem is the LOWEST barriers to entry are set too high, this has become a particular problem when insecure WiFi is so common and even "basic" web-sites really need HTTPS (e.g. this one).


It is not a legitimate requirement.

HTTP can be man-in-the-middled passively and without detection, making dragnets super easy.

In order for HTTPS self-signed certs to be effectively man-in-the-middled, the attacker needs to be careful to only selectively MITM, because if the attacker does it indiscriminately, clients can record what public key was used. The content provider can run a process on top of a VPN / Tor that periodically requests a resource from the server; if it detects that the service is being MITMed, it can shut down the service and bring in a certificate authority.

Edit: Also, all this BS about how HTTPS implies security is beside the grandparent's point: certificates and encryption are currently conflated, to the great detriment of security, and they need not be.


> HTTP can be man-in-the-middled passively and without detection, making dragnets super easy.

Nothing can be man-in-the-middled passively, that makes no sense. That isn't what a MitM is. It requires active involvement by its very nature.

> In order for HTTPS self-signed certs to be effectively man-in-the-middled, the attacker needs to be careful to only selectively MITM, because if the attacker does it indiscriminately, clients can record what public key was used.

I genuinely don't understand what you're trying to say.

> The content provider can run a process on top of a VPN / Tor that periodically requests a resource from the server; if it detects that the service is being MITMed, it can shut down the service and bring in a certificate authority.

If the MitM originates from a specific location (e.g. a single Starbucks, a single hotel, an airport, etc) it would never be detected by that method.

> Also, all this BS about how HTTPS implies security is beside the grandparent's point: certificates and encryption are currently conflated, to the great detriment of security, and they need not be.

Only MitM protection AND encryption together provide a secure connection. Individually they're insecure.

If someone wants to come up with a security scheme which doesn't depend on certificates that would be fine. You just have to solve the encryption issue (easy) and the identity issue (hard).


> Nothing can be man-in-the-middled passively, that makes no sense. That isn't what a MitM is. It requires active involvement by its very nature.

By this I mean record all form submissions done through HTTP.

>> In order for HTTPS self-signed certs to be effectively man-in-the-middled, the attacker needs to be careful to only selectively MITM, because if the attacker does it indiscriminately, clients can record what public key was used.

> I genuinely don't understand what you're trying to say.

The default thing we're trying to prevent is someone close to the server MITMing every request, recording each post, and replaying them so that they are not discovered.

> If the MitM originates from a specific location (e.g. a single Starbucks, a single hotel, an airport, etc) it would never be detected by that method.

That is true for the example I gave which is just a proof-of-concept, but not true for a better method, like decentralization + public key signing.

What I'm fundamentally saying is that Cert + HTTPS is more secure, but it is not fully secure, since you have to trust the cert provider. Just in the same way, HTTPS without cert is not fully secure, but it is (much) more secure than HTTP.


>man-in-the-middled passively

"eavesdropped" is the word you're looking for.


I think NSA was calling it Man On The Side? Or was that something different?


It's slightly different. QUANTUM man-on-the-side deployments can always read packets and inject packets, but it appears cannot stop packets getting through or change them en route.

Deployments in the wild appear to use cable splitters to read, so often have no direct write access due to transport layer limitations and sometimes deliberate "Data Diode" one-way firewalls on the hot pipe (just in case?); they communicate with instrumented boxes closer to 'home' on a management network, which do not have to be on-path themselves, some of which may well be hacked routers, to do packet injection. C&C was centralised pingbacks, but that lost races (typical latency: 670ms-ish) so is now distributed (with QUANTUMFIRE).

They can use that knowledge and capability together to race to control a TCP connection, after which the real packets will be discarded by the target endpoint (because the seq is "wrong"), after which they are fully man-in-the-middle and can inject redirection headers (QUANTUMINSERT), tracking cookies (QUANTUMCOOKIE) or infect downloaded executables (QUANTUMCOPPER); they can also inject RSTs to force TCP connection resets (QUANTUMSKY; also used by Blue Coat, the .cn Golden Shield, and many others).

Note this implies that they are detectable and locatable, if you know what to look for.

(Sorry I can't be much more helpful without going in and taking one, and I think they would very strongly disapprove of that. <g>)
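
On the detectability point: because a man-on-the-side injector races the real server rather than blocking it, the victim's network sees two different TCP segments claiming the same sequence number. A rough sniffer sketch of that heuristic (assumes scapy, root privileges, and an interface name; illustrative only):

    # Flag TCP segments that reuse a sequence number with a different
    # payload - one fingerprint of an injection race. Needs scapy and root.
    from scapy.all import sniff, IP, TCP, Raw

    seen = {}  # (src, dst, sport, dport, seq) -> payload

    def watch(pkt):
        if IP in pkt and TCP in pkt and Raw in pkt:
            key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].sport,
                   pkt[TCP].dport, pkt[TCP].seq)
            payload = bytes(pkt[Raw].load)
            if key in seen and seen[key] != payload:
                print("possible injection race on", key)
            seen[key] = payload

    sniff(iface="eth0", filter="tcp", prn=watch, store=False)  # iface is an assumption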


That's all good in theory, but there have been demonstrated attacks against man-in-the-middle-able protocols and we've lacked the ability to respond usefully, precisely because the protocols were designed to be man-in-the-middle-able. Everyone knows it's happening and it's even easier to detect than your example, but there's nothing useful to do with that knowledge other than complain.

https://www.eff.org/deeplinks/2014/11/starttls-downgrade-att...


All the attacker needs to do is target the "CA" of the target.

For example, in an individual user situation, if the "CA" is a mac user, you use a local exploit, and export the private key from the Keychain. Done.


That's the standard motivation for CAs, but I don't buy it.

Most of the time, I'm much more interested in a domain identity than a corporate identity. If I go to bigbank.com and am presented with a certificate, I want to know that I am talking to bigbank.com -- not that I'm talking to "Big Bank Co." (or at least one of the legal entities around the world under that name).

Therefore it would make much more sense if your TLD made a cryptographic assertion that you are the legal owner of a domain, and if that information could be utilized up the whole protocol stack.

That would not have a legitimate cost, apart from the domain name system itself.
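
That's roughly what DANE (RFC 6698) specifies: the domain owner publishes a DNSSEC-signed TLSA record binding the domain to its TLS key. A hypothetical record for the example above (hash truncated) might look like:

    ; "the cert served at bigbank.com:443 must have a public key whose
    ; SHA-256 hash matches this value"
    _443._tcp.bigbank.com. IN TLSA 3 1 1 2abf2cd6ad2b0a5e...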


Without some kind of authentication, the encryption TLS offers provides no meaningful security. It might as well be an elaborate compression scheme. The only "security" derived from unauthenticated TLS presumes that attackers can't see the first few packets of a session. But of course, real attackers trivially see all the traffic for a session, because they snare victims with routing, DNS, and layer 2 redirection.

What's especially baffling about self-signed certificate advocacy is the implied threat model. Low- and mid-level network attackers and crime syndicates can't compromise a CA. Every nation state can, of course (so long as the site in question isn't public-key-pinned). But nation states are also uniquely capable of MITMing connections!


>The only "security" derived from unauthenticated TLS presumes that attackers can't see the first few packets of a session

Could you elaborate here? With a self-signed cert, the server is still not sending secret information in the first few packets; it just tells you (without authentication) which public key to use to encrypt the later packets (more precisely, the public key used to encrypt the session key for the later symmetric encryption).

The threat model would be eavesdroppers who can't control the channel, only look at it. Using the SS cert would be better than an unencrypted connection, though it still shouldn't be represented as being as secure as full TLS. As it stands, the server is either forced to wait until it can get a cert, or to serve unencrypted such that all attackers can see.


There are no such attackers.


Do you think that with public key pinning self-signed certs begin to make sense? Also, do you feel that CAs and the PKI system do provide appropriate authentication (this being a cost-benefit rather than a 100%-correctness analysis)?


Yes! Key continuity is a legitimate identity scheme; the only trick is to implement it scalably, so it actually happens, rather than being a fig leaf (an unworkable variant of key continuity already exists in browsers today).

I think the CA system by itself is inadequate, but unlike unauthenticated TLS, actually does provide some security.


You're saying that everyone able and willing to passively snoop is also able and willing to compromise the channel and mimic the server?


Correct.


Then I don't see how that would be true. Mimicking a server requires significantly more effort than simply storing the traffic. So even if someone were able, it doesn't follow that they would go through that effort in every case.


I'm not entirely sure I understand your point, so if I misunderstood you please correct me.

First, TLS has three principles; if you lose one, it becomes essentially useless:

1) Authentication - you're talking to the right server

2) Encryption - nobody saw what was sent

3) Verification - nothing was modified in transit

Without authentication, you essentially are not protected against anything. Any router, any government can generate a cert for any server or hostname.

Perhaps you don't think EV certs have a purpose - personally, I think they're helpful to ensure that even if someone hijacks a domain they cannot issue an EV cert. Luckily, the cost of certificates is going down over time (usually you can get the certs you mentioned at $10/$150). That's what my startup (https://certly.io) is trying to help people get, cheap and trusted certificates (sorry for the promotion here)


Encryption without verification is not useless; it protects against snooping.


It doesn't prevent snooping -- you can still be MITM'd. It does, however, make snooping much harder, because it has to be done actively.


If you don't verify what is sent, I could easily send you a malicious web form. If you don't verify the key or cert behind the connection, anyone can claim to be x site.


Stopping snooping by encrypting without strictly checking certificates the first time you connect is better than not using encryption because it stops dragnet surveillance.

Also, active attacks (like MITM) are harder to do and easier to detect than passive attacks (snooping).


That would make dragnet surveillance easier. Just MITM everything and you'll be the Trusted Source™ for all traffic.


No, that does not make dragnet surveillance easier. Dragnet surveillance depends on not being easily detectable. However, an SSL MITM attack is easily detected, as it changes the fingerprint of the SSL key of the site you're talking to. By recording fingerprints and comparing them over time or across users, or by directly contacting the site's operator (using a secure communication channel, e.g. meeting him in person), the existence of a MITM is easily proven.
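
As a sketch of the recording idea (trust-on-first-use; illustrative only, with the store name invented and legitimate re-keying hand-waved):

    import hashlib, json, ssl

    SEEN = "seen_fingerprints.json"  # hypothetical local store

    def fingerprint(host, port=443):
        # SHA-256 over the DER-encoded certificate the server presents.
        pem = ssl.get_server_certificate((host, port))
        return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    def check(host):
        try:
            seen = json.load(open(SEEN))
        except FileNotFoundError:
            seen = {}
        fp = fingerprint(host)
        if host in seen and seen[host] != fp:
            # Could be a MITM, or a legitimate key rotation; flag it either way.
            print("WARNING: fingerprint for", host, "changed")
        seen[host] = fp
        json.dump(seen, open(SEEN, "w"))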

BTW what you call "dragnet surveillance" is better described as "Pervasive Monitoring", see also RFC7258 "Pervasive Monitoring Is an Attack" [1].

[1] http://tools.ietf.org/html/rfc7258


Nobody's suggesting that self-signed certs be treated as trusted or CA-cert equivalent, only that they not be regarded as worse than unencrypted http. In the proposals being discussed, that attack would be no more of a threat than MitMs currently are against http.


The warning pages are really ridiculous. Why doesn't every HTTP page show a warning you have to click through?

But it's not like MITM attacks are not real. CAs don't realistically do a thing about them, but it is true that you can't trust that your connection is private based on TLS alone. (unless you're doing certificate pinning or you have some other solution).


You're absolutely right. From first principles, HTTP should have a louder warning than self-signed HTTPS.

Our hope is that Let's Encrypt will reduce the barriers to CA-signed HTTPS sufficiently that it will become realistic for browsers to show warning indicators on HTTP.

If they did that today, millions of sites would complain, "why are you forcing us to pay money to CAs, and deal with the incredible headache of cert installation and management?". With Let's Encrypt, the browsers can point to a simple, single-command solution.


Thanks for doing this. It's really great, and it's something that clearly needs to happen.

The next step will be to replace the CA system with something actually secure, but that comes after we move the web to a place where most websites are at least trying.


We'll be in a position to deploy defenses like pinning (http://www.ietf.org/id/draft-ietf-websec-key-pinning-21.txt) for site operators who want more protection against the structural problems of the CA system. That will need to be implemented with care, but it should be possible.
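
For the curious, the draft defines an HTTP response header along these lines (pin values here are placeholders, not real hashes):

    Public-Key-Pins: pin-sha256="base64hashofprimarykey=";
                     pin-sha256="base64hashofbackupkey=";
                     max-age=5184000; includeSubDomains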


Because HTTP does not imply security; HTTPS does. Without proper certificates, those guarantees are diluted; hence the warnings.


Why doesn't every HTTP page show a warning you have to click through?

Back in the Netscape days, it did. People got tired of clicking OK every time they searched for something.


Eventually maybe the browsers will do that. Currently far too many websites are HTTP-only to allow for that behavior, but if that changes and the vast majority of the web is over SSL it would make sense to start warning for HTTP connections. That would further reduce the practicality of SSL stripping attacks.


It's not enough to keep the snoops out - you need to KNOW you're keeping the snoops out. That's what SSL helps with. A certificate is just a public key signed by a trusted authority. Sites can also choose to verify the certificate chain: if this is done, then even if a 3rd party can procure a fake cert, they can't snoop the traffic without the same cert the web server uses.

Site: Here's my public key. Use it to verify that anything I sent you came from me. But don't take my word for it, verify it against a set of trusted authorities pre-installed on your machine.

Browser: Ok, your cert checks out. Here's my public key. You can use it for the same.

Site: Ok, now I need you to reply to this message with the entire certificate chain you have for me, to make sure a 3rd party didn't install a root cert and inject keys between us. Encrypt it with both your private key and my public key.

Browser: Ok, here it is: ASDSDFDFSDFDSFSD.

Site: That checks out. Ok, now you can talk to me.

This is what certificates help with. There are verification standards that apply, and all the certificate authorities have to agree to follow these standards when issuing certain types of SSL certificates. The most stringent, the "Green bar" with the entity name, often require verification through multiple means, including bank accounts. Certificate authorities that fail to verify properly can have their issuing privileges revoked (though this is hard to do in practice, it can be done).
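
The "your cert checks out" step above is the part the CA system supplies. As a minimal sketch of what a client does with it (Python stdlib against the system's trusted roots; the hostname and printed fields are just for illustration):

    import socket, ssl

    # Validate the server's chain against the local root store and check the
    # hostname before exchanging application data - the step a self-signed
    # cert cannot pass without prior arrangement.
    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            cert = tls.getpeercert()
            print("issuer:", cert["issuer"])
            print("expires:", cert["notAfter"])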


Here's some comparison screenshots of the "bling" that is being described (hard to even tell that some of these sites are SSL'd without getting the EV)

https://www.expeditedssl.com/pages/visual-security-browser-s...


I'm pissed off 'cos I'm on the board for rationalwiki.org and we have to pay a friggin' fortune to get the shiny green address bar ... because end users actually care, even as we know precisely what snake oil the whole SSL racket is. Gah.


I'm all for CAs to burn in a special hell. The other cost, though, was always getting a unique IP. Is that still a thing? Has someone figured out multiple certificates for different domains on the same IP? Weren't we running out of IPv4 at some point?


Yes, there are two main mechanisms, each with its own limitations.

https://en.wikipedia.org/wiki/SubjectAltName https://en.wikipedia.org/wiki/Server_Name_Indication
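
SNI is the one that removes the unique-IP requirement: the client names the host it wants during the handshake, so one listener can choose between certificates. A minimal server-side sketch (Python 3.7+; the cert paths are hypothetical):

    import ssl

    # One IP, several certs: pick a certificate based on the SNI hostname
    # the client sends in its ClientHello. Paths are placeholders.
    contexts = {}
    for host in ("example.com", "example.org"):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(f"/etc/ssl/{host}.pem", f"/etc/ssl/{host}.key")
        contexts[host] = ctx

    def pick_cert(ssl_obj, server_name, default_ctx):
        if server_name in contexts:
            ssl_obj.context = contexts[server_name]  # swap before the handshake completes

    default = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    default.load_cert_chain("/etc/ssl/example.com.pem", "/etc/ssl/example.com.key")
    default.sni_callback = pick_cert  # then wrap the listening socket with `default`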


The thing is, without a chain of trust, the self-signed certificate might be from you or it might be from the "snoops" themselves. Certificates that don't contain any identifying information are vulnerable to man-in-the-middle attacks.


I have some certificates through RapidSSL, and when they send me reminders to renew, the e-mails come with this warning:

"Your certificate is due to expire.

If your certificate expires, your site will no longer be encrypted."

Just blatantly false.


They might as well say something even more ominous: "If your certificate expires, your site will no longer be accessible."

Of course, we know that's not true either, but try explaining to your visitors how to bypass the security warning (newer browsers sure don't make it obvious, even if you know to look for it).


I just bought a cert on Saturday for $9. It's less than the domain name.


$9 is a big step up from free, which is what the rest of my blog costs.


Is your blog a .tk site? Where else would you get a free domain?


You can get them free for web use. Not sure where he is coming from.


Wildcard SSL certs are ~$100/year. Those have always been much more of a racket, but they're so worth the extra cost to set them up once on your load balancers and not have to think about SSL certs again for 5+ years.


> 200 bucks for us to say he's cool

There are trusted free certificates as well, like the ones from StartSSL.

> if a bank pays 10,000 bucks for a really cool verification, they get a giant green pulsating URL badge

Yeah, $10,000 and legal documentation proving that they are exactly the same legal entity as the one stated on the certificate. All verified by a provider that's been deemed trustworthy by your browser's developers.

Finally, if a certificate is self-signed, it generally should be a large warning to most users: the certificate was made by an unknown entity, and anybody may be intercepting the communication. Power users understand when self-signed CAs are used, but they don't get scared of red warnings either, so that's not an issue.


This certificate industry has been such a racket. It's not even tacit that there are two completely separate issues that certificates and encryption solve. They get conflated and non technical users rightly get confused about which thing is trying to solve a problem they aren't sure why they have.

But a man-in-the-middle attack will remove any secrecy encryption provides and to prevent that, we require certificate authorities to perform some minimal checks that public keys delivered to your browser are indeed the correct ones.

You've got a point about how warnings are pushing incentives towards more verification, but they serve a purpose that aligns with secrecy of communication.


Wasn't WOT (Web Of Trust) supposed to fix this? Basically, I get other people to sign my public key asserting that it's actually me and not someone else, and if enough people do that it's considered "trusted", but in a decentralized fashion that's not tied to "authorities"?


No, it means a trusted third party has not verified that whoever you are connecting to is who they say they are.


Perhaps you should understand a system before slandering it? As others have said, encryption without authentication is useless.

Running a CA has an associated cost, including maintenance, security, etc. That's what you pay for when you acquire a certificate. Whether the current market markup is too high is a different question, but paying for a certificate is definitely not spending $200 to look cool.

CAs are the best known way (at the moment) to authenticate through insecure channels (before anyone brings up pinned certs or WoT, read this comment of mine: https://news.ycombinator.com/item?id=8616766)

EDIT: You can downvote all you want but I'm still right. Excuse my tone, but slandering a system without an intimate understanding of the "how"s and the "why"s (i.e. spreading FUD) hurts everyone in the long run.


That's the third comment of yours in which I've seen you taunt downvoters via edits in this thread alone. That's why I'm downvoting you. Knock it off, please.


I'm sorry it came across as a taunt, I didn't mean it like that.

Downvote sprees without an explanation detract from healthy discussion since they basically mean "I'm so mad about how wrong you are that I don't even care about why you think you are right".

I guess I'll just ignore them...


Do please ignore them. That's what the HN guidelines ask.


This is awesome! It looks like what CACert.org set out to be, except this time, instead of developing the CA first and then seeking certification (which has been a problem due to the insanely expensive audit process), the EFF got the vendors on board first and then started on the nuts and bolts.

This is huge if it takes off. The CA PKI will no longer be a scam!

I'd trust the EFF/Mozilla to be good stewards of the infrastructure over a random for-profit "security corporation" like VeriSign, any day of the week and twice on Sunday.


I don't see how this actually keeps the CA PKI from being a scam. While I personally trust the EFF & Mozilla right now, as long as I can't meaningfully revoke that trust, it's not really trust and the system is still broken.


You can revoke your trust in any CA at any time, you don't even need to see any errors! Just click the little padlock each time you visit a secure website and see if the CA is in your good books. If it's not, pretend the padlock isn't there!

OK, that's a little awkward. A browser extension could automate this. But in practice, nobody wants to do this, because hardly anyone has opinions on particular CAs. It's a sort of meta-opinion - some people feel strongly they should be able to feel strongly about CAs, but hardly anyone actually does. So nobody uses such browser extensions.


Can't you just delete the CA from the browser?

On Firefox it's preferences -> advanced -> certificates -> view certificates.


Yes you can. Obviously, you can choose not to make secure connections with sites certified by a CA you don't trust. But then you just can't use your bank's website anymore, or your search engine, or whatever.

Users have a clear stake in whatever informational exchange occurs between them and the websites they access. We should have the authority to participate in determining the terms on which that exchange is secured.


I'm curious as to whether Firefox's sync functionality propagates CA overrides across machines. If not then this is something you'd have to repeat over for every machine you use, making it effectively too tedious to be practical.


It doesn't yet, unfortunately. There's a related feature request for syncing user added certificates:

https://bugzilla.mozilla.org/show_bug.cgi?id=583935

But syncing which certificates to delete is probably a much harder sell.

At least there's a way to do it programmatically:

    apt-get install libnss3-tools
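    # certutil -D deletes the certificate with nickname $TARGET_CA_NAME
    # from the given Firefox profile's NSS database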
    certutil -d /home/$USER/.mozilla/firefox/$FIREFOX_PROFILE -D -n $TARGET_CA_NAME


>A browser extension could automate this.

Unfortunately, it couldn't on Chrome, because you can't even access a page's certificate from an extension in Chrome:

http://stackoverflow.com/questions/18689724/get-fingerprint-...

And Firefox's certificate API is not much better, only passive access without ability to block connections if you detect an unwanted cert.


> And Firefox's certificate API is not much better, only passive access without ability to block connections if you detect an unwanted cert.

Nope. Firefox's Addon API lets you do pretty much whatever you want. It might be kind of hard and annoying, but you can certainly block connections that are signed by an untrusted CA. How do you think Convergence [0] worked?

[0] http://convergence.io/


Fair enough, that's what I get for believing a Stackoverflow answer (even a highly upvoted one) without verifying for myself:

https://developer.mozilla.org/en-US/Add-ons/Overlay_Extensio...

So with Firefox, you could build the kind of add-on described by Mike.

But I have confirmed for myself the Chrome extension API's lack of ability to even read the certificate of the current page[1]. Chrome may be able to block page loads (don't know, haven't checked), but without being able to even view a cert, it doesn't do much good.

1. https://code.google.com/p/chromium/issues/detail?id=93636


How does Convergence work? Is it any good?


Can't you just remove the cert from your OS/browser's trust store? I can do this on Ubuntu + Firefox.

Incidentally, I can also add my own CA.


Looking at the spec [0] I'm concerned about the section on 'Recovery Tokens'.

"A recovery token is a fallback authentication mechanism. In the event that a client loses all other state, including authorized key pairs and key pairs bound to certificates, the client can use the recovery token to prove that it was previously authorized for the identifier in question.

This mechanism is necessary because once an ACME server has issued an Authorization Key for a given identifier, that identifier enters a higher-security state, at least with respect the ACME server. That state exists to protect against attacks such as DNS hijacking and router compromise which tend to inherently defeat all forms of Domain Validation. So once a domain has begun using ACME, new DV-only authorization will not be performed without proof of continuity via possession of an Authorized Private Key or potentially a Subject Private Key for that domain."

Does that mean, if for instance, someone used an ACME server to issue a certificate for that domain in the past, but then the domain registration expired, and someone else legitimately bought the domain later, they would be unable to use that ACME server for issuing an SSL certificate?

[0] https://github.com/letsencrypt/acme-spec/blob/master/draft-b...


This is a question about the policy layer of the CA using the ACME protocol.

The previous issuing CA should have revoked the cert they issued when the domain was transferred. But a CA speaking the ACME protocol might choose to look at whois and DNS for additional information to decide whether it issues different challenges in response to a certification request.


It's possible that this question shouldn't be decided one way or another in the specification, since it will ultimately be more a matter of CA policy about how the CA wants to handle automated issuance and risks.


I suppose they could check WHOIS at a regular interval to check whether a domain secured by one of their certs has expired, and update the state of the ACME server accordingly?


Free CA? This is cool. Why this wasn't done a long time ago is beyond me. (Also please support wildcard certs)

An interesting thing happened at a meet-up at Square last year. Someone from google's security team came out and demonstrated what google does to notify a user that a page has been compromised or is a known malicious attack site.

During the presentation she was chatting about how people don't really pay attention to the certificate problems a site has, and how they were trying to change that through alerts/notifications.

After which someone asked that if google cared so much about security why didn't they just become a CA and sign certs for everyone. She didn't answer the question, so I'm not sure if that means they don't want to, or they are planning to.

What privacy concerns should we have if someone like goog were to sign the certs? What happens if a CA is compromised?


It wasn't done a long time ago because running a CA costs money (which is why they charge for certificates), so whoever signs up to run one is signing up for a money sink with no prospect of direct ROI, potentially for a loooooong time. This new CA is to be run by a non-profit that uses corporate sponsorship rather than being supported by the market; whether that's actually a better model in the long run is I suppose an open question. But lots of other bits of internet infrastructure are funded this way, so perhaps it's no big deal.

There aren't a whole lot of privacy concerns with CAs as long as you use OCSP stapling, so users' browsers aren't hitting up the CA each time they visit a website (Chrome never does this, but other browsers can).

Re: CA compromise. One reason running a CA costs money is that the root store policies imposed by the CA/Browser Forum require (I think!) the usage of a hardware security module which holds the signing keys. This means a compromised CA could issue a bunch of certs for as long as the compromise is active, but in theory it should be hard or impossible to steal the key. Once the hackers are booted out of the CA's network, it goes back to being secure. Of course quite some damage can be done during this time, and that's what things like Certificate Transparency are meant to mitigate - they let everyone see what CAs are doing.


> imposed by the CA/Browser Forum require (I think!)

That's something imposed by the audit criteria (WebTrust/ETSI). What you detailed is also why roots are left disconnected from the internet - if you compromise an intermediary, that can be blacklisted as opposed to the entire root.


I'm curious. What's the biggest cost in running a CA? As in, what makes those certs so expensive?


Ensuring physical security of CA private keys is expensive. This requires things like sturdy padlocks, closed-circuit security cameras, and up-to-date hardware and software.

These are the things you pay for when you buy a certificate from a CA. In fact, I would be 100% opposed to obtaining my website's cert from a CA if it were free-of-charge, because I know good physical security is expensive. However, I already trust the EFF and the Umich researchers (and their assurances of physical security), so I'm absolutely happy with obtaining a free certificate from them.


.... also, you need multiple people in the organisation, you typically need to write your own infrastructure for vending certs and billing, you need to run OCSP responders and perhaps CRLs so clients can check if a cert was revoked (that can take a lot of bandwidth), and then you need support staff, because when people are paying, they expect support, etc.


Your mileage may vary, but the biggest upfront cost is the WebTrust audit. Certly got quoted $150k for a reasonable root and its subordinates. This is a yearly cost. HSMs are not cheap either, plus you have to host them securely, hire validation staff, etc...


> Why this wasn't done a long time ago is beyond me.

While probably not officially scriptable, free certificates have been available for a long time: https://www.startssl.com/?app=1

Also, no free wildcard certs. Which I really want.

> What happens if a CA is compromised?

Looking at past compromises, if they have been very irresponsible they are delisted from the browsers' list of trusted roots (see diginotar). If they have not been extremely irresponsible, then they seem to be able to continue to function (see Comodo).

https://en.wikipedia.org/wiki/DigiNotar#Refusal_to_publish_r... https://blogs.comodo.com/uncategorized/the-recent-ra-comprom...


I'll run a free CA right now. Who wants a cert for microsoft.com?

NB: This is a bit unfair, because the existing for-money CAs haven't always stopped someone from registering microsoft.com.


You raise a good point though: SSL/TLS certs are trying to deal with two separate problems:

1. Over the wire encryption (which this handles)

2. Site identification to stop phishing (a bad system, but the best we've got).

Currently, even for the cheapest certs (domain+email validated), the CAs will reject SSL cert requests for anything that might be a phishing target. Detecting "wellsfargo.com" is pretty easy; where it gets tricky is things like "wellsforgo.com", "wellsfàrgo.com", etc. Which, if I'm reading this right, will just sail through with LetsEncrypt.

I suspect we're actually going to end up with two tiers of SSL certs, as the browser makers have started to really de-emphasize domain-validated certs [1] like this vs. the Extended Validation (really expensive) certs, to the point where in most cases having a domain cert does not show green (and maybe doesn't even show a lock) at all.

As a side note, Google announced that they were going to start using SSL as a ranking signal [2] (sites with SSL would get a slight bump in rankings). From this perspective, the "high" cost of a cert was actually a feature, as it made life much more expensive for blackhat SEOs, who routinely set up hundreds of sites.

1 - Screenshots: https://www.expeditedssl.com/pages/visual-security-browser-s...

2 - http://googlewebmastercentral.blogspot.com/2014/08/https-as-...


If you can make microsoft.com serve up the correct challenge response, you'll be able to get a cert for them issued by this project. This isn't a pure rubber-stamping service.
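
For context, the challenge is roughly: the CA hands the applicant a random token and then checks that the domain serves it back at a well-known URL, so only someone who controls the site (or its traffic) can pass. A much-simplified sketch of the responder side (token and path layout per the ACME draft, simplified; not the real client):

    import http.server

    TOKEN = "random-token-from-the-CA"            # placeholder
    KEY_AUTH = TOKEN + ".account-key-thumbprint"  # placeholder

    class Challenge(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/.well-known/acme-challenge/" + TOKEN:
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(KEY_AUTH.encode())
            else:
                self.send_error(404)

    http.server.HTTPServer(("", 80), Challenge).serve_forever()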


There are also going to be controls to limit automated issuance for domains with existing certs, among other criteria.


> Free CA? This is cool. Why this wasn't done a long time ago is beyond me. (Also please support wildcard certs)

There have been previous attempts, e.g. http://www.cacert.org/

AFAIK they failed in the politics front (getting accepted in mainstream browsers). Sounds like EFF might have better leverage.


I think the issue of whether there should be a whole new industry borne on the back of the CA architecture is all a bit of a red herring, anyway. This is only security at the web browser: do we trust our OS vendors to be CAs, too? If so, then I think we may see a cascade/avalanche of new CAs being constructed around the notion of the distribution. I know for sure that even if I have all the S's in the HTTPS in order, my machine itself is still a real weak point. When, out of the box, the OS is capable of building its own certified binaries and adding/denying capabilities of its build products, inherently, then we'll have an interesting security environment. This browser-centric focus on encryption is but the beachhead for broader issues to come, methinks; do you really trust your OS vendor? Really?


If each domain name can get a non-wildcard cert for free, quickly, why do you need wildcard certs? For multi-subdomain hosting on one server? Just wondering.


For my previous use cases, it's ideal for dynamically created subdomains of a web application. If I know ahead of time, it's easy to grab a cert for any subdomain. However if a user is creating subdomains for a custom site or something similar, it's much nicer/easier to have the wildcard cert.


The lets-encrypt demo makes it look like you could easily script cert acquisition for new subdomains. And the CA domain validation appears to be totally automated (and fast).


The downside is that now I have to manage and deal with multiple certs for all of my sub-domains, rather than dealing with a single cert/key pair.


Lots of services create dynamic subdomains in the form of "username.domain.com". To offer SSL on those domains without a wildcard certificate, you'd need to obtain a new certificate and a new IPv4 address every time a user signs up. You also need to update configuration and restart the web server process.


You don't need a new IPv4 address for each cert. That's for Windows XP. Just stop giving a shit about XP and use SNI. Problem solved.


Try telling that to any business. XP's marketshare worldwide is between 10-20% according to some metrics (cursory google result: http://www.netmarketshare.com/operating-system-market-share....)

There are very few companies out there that are okay with serving 1/5th of their potential customers an error page, and for good reason.


Looking at a recently created map by CloudFlare (http://blog.cloudflare.com/introducing-universal-ssl/ http://cloudflare.github.io/sni-visualization/) it looks like a large portion of that comes from China.

A quick glance over EU countries reveals that more than 91% of potential users support SNI.

It might depend on your line of business, but I think for some entities this might be a viable option.


Google is a CA, and they sign their own certs as "Google Internet Authority G2" under SHA fingerprint BB DC E1 3E 9D 53 7A 52 29 91 5C B1 23 C7 AA B0 A8 55 E7 98.


They're subordinate under another CA (GlobalSign), and presumably contractually obligated to only sign their own certs. GlobalSign offers the following service to anyone willing to pay the sizable fee, undergo a sizable audit, comply by the CA/Browser forum rules, and only issue certs to themselves:

https://www.globalsign.com/certificate-authority-root-signin...

There are a few other vendors that I've seen offer similar services.


I couldn't be happier about the news; the EFF and Mozilla have always had a special place in my heart. However, the fact that we have to wait for our free certificates until the accompanying command line tool is ready for prime time seems unnecessary. Another thing I'm interested in is whether they'll provide advanced features like wildcard certificates. This is usually the kind of thing CAs charge somewhat significant amounts of money for.


The thing that's causing the delay is not the client software development, it's the need to create the CA infrastructure and then perform a WebTrust audit. If we were ready on the CA side to begin issuing certificates today, we would be issuing them today.


I think I may have misunderstood all of you. Is the audit process itself really that time consuming? I can imagine the amounts of bureaucracy involved, but I can't imagine this takes much longer than, say, a month or so. Most of the time is probably spent waiting for someone or something, right? I mean, we're talking about very capable people here who have done this kind of thing before.


You are lucky not to have had to deal with corporate bureaucracy - these things take time :-) At work I'm integrating an API for a mobile operator; it's apparently working and ready to be used, yet I've been waiting a couple of months to get all the documentation and everything set up.

Even once they have the CA, it needs to be added to browsers, which will take time. Taking into account the release cycles of embedded devices (read: phones where the manufacturer hasn't released an update), summer 2015 seems rather optimistic.


The CA will be cross-signed, so it does not need to be added to browsers right away in order to be accepted. It will be treated as an intermediate cert, not a root cert, by all mainstream browsers at the outset.

But there is a lot of paperwork to be done, and a lot of engineering to be done, and a lot of things to buy and people to hire, in order to get a CA operating.


The CA is audited for six months. During this time frame, auditors collect info about the CA, and mainly what happens with it during the six months. Sometimes an auditor will issue a "readiness" statement to help the root get included before the audit closes, but it doesn't seem like they'll need that here.


I doubt the actual CA has been setup either. They're setting up their own root while cross signing from IdenTrust, that's not a one day activity. Auditors have to be present, software has to be designed and tested, etc.


It's true that this shouldn't be done in a day, but it's trivial compared to building a command line tool that automatically configures HTTP servers and designing an open protocol that issues and renews certificates. This is especially true if one of your partners is a CA.

---

Let me be clear here: I'm not complaining that I don't get my free cake now. I do think, however, that most people at the EFF and Mozilla would agree that we needed something like this a couple of years ago. In that context I think it's at least noteworthy that they decided to wait until other parts of the system are ready.


:-) It's technically easier but organizationally more difficult and time consuming.
