Maintaining digital certificate security (googleonlinesecurity.blogspot.com)
295 points by ayrx on July 8, 2014 | 122 comments



Software vendors shouldn't list CAs as trusted when they prove they can't be trusted - but removing a CA from the trust store breaks things for innocent websites that just chose a crappy CA.

Every CA should be required to publish a signed, public list of every certificate they have issued that is currently valid; and no certificate should be considered valid if it isn't on a CA's public list of certificates.

That way, when a CA fucks up like this, vendors could remove their certificates from the root stores, but could grandfather in all their previous certificates so the CA's customers have a few months to get a certificate from a decent CA. We could even use the list to contact all the CA's customers and advise them of the upgrade deadline.

If this CA isn't removed from the root store, it sends a message to other CAs: You can issue bad certificates with impunity, and there will be no negative consequences.
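The mechanics of the signed-list check are simple to sketch. This is a toy (the CA's "signature" here is an HMAC stand-in, since a real published list would carry an actual digital signature, and all key/cert values are made up):

```python
import hashlib
import hmac

# Hypothetical signed list: the CA publishes (list_bytes, tag); clients
# treat a certificate as valid only if it appears on a correctly signed list.
ca_key = b"stand-in for the CA's signing key"
issued = [b"cert-A", b"cert-B"]
list_bytes = b"\n".join(hashlib.sha256(c).hexdigest().encode() for c in issued)
tag = hmac.new(ca_key, list_bytes, hashlib.sha256).digest()

def cert_on_signed_list(cert, list_bytes, tag, key):
    # Reject the whole list if its signature doesn't verify.
    if not hmac.compare_digest(hmac.new(key, list_bytes, hashlib.sha256).digest(), tag):
        return False
    return hashlib.sha256(cert).hexdigest().encode() in list_bytes.split(b"\n")

print(cert_on_signed_list(b"cert-A", list_bytes, tag, ca_key))  # True
print(cert_on_signed_list(b"cert-C", list_bytes, tag, ca_key))  # False
```

The same list also gives you the "contact every customer" step for free: it enumerates exactly who needs to migrate.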


I'm not sure that you actually want to avoid punishing people for choosing crappy CAs. That causes moral hazard, because you're not risking anything yourself by choosing a crappy CA, you're just spreading the risk among everyone else on the internet, because to avoid punishing you, they're accepting certificates issued from a known bad actor.

Doing it this way is actually a good way to handle product differentiation, because generally people won't make browsing choices based on certificate brand name, but commercial website owners will choose a better CA if it means that CA is less likely to get de-certified.


On the other hand, you want to make sure CAs keep disclosing problems like these when they become aware of them; otherwise they will certainly do anything in their power to keep breaches silent, to avoid the certain death of the company.


Well, that was kind of a big difference between DigiNotar and Comodo. DigiNotar got hacked, knew about it, then tried to mitigate without telling anyone. Comodo got hacked and disclosed what they knew ASAP. DigiNotar was pulled from the trusted root stores and went out of business; Comodo wasn't.

There were probably other reasons for the differential treatment (the scope of the DigiNotar hack was bigger, for example), but I think a lot of them are essentially strong correlates of the disclosure policy (i.e. if you have other responsible security practices, you're more likely to have a good disclosure policy).


Once the death-of-the-company option is on the table, it gives the CA a reason to comply with any lesser punishment.

For example fines, changes to procedures, mandatory security audits, agreeing not to issue certain types of certificate etc - the CA is going to do their best to comply if the alternative is to be put out of business.

Whether we want browser vendors / an industry standards committee to have that much power is another matter, of course.


CAs already have to go through mandatory audits, and the procedures they must follow are spelled out.

Unfortunately there's no real procedure you can put in place to prevent hacking, as the NSA discovered to their cost.


Excellent point. Relieving people of their responsibility to think about who they should trust makes it easier for the untrustworthy to operate.

If we told people "no matter who swindles you, we'll make you whole", everybody would invest in Ponzi schemes.


Your average person probably picks their CA because it's part of their hosting company, who probably resells someone else's CA largely because they're cheap.

You'd be punishing a lot of people who don't know better for something that really should be built into DNS and HTTP.


This just moves the problem down one level and doesn't change anything. If you're a host, you only have an incentive to choose a trustworthy CA if, by choosing a lesser CA, you're risking significant problems for your users (who will switch hosts when they have a really bad experience and suddenly can't take credit cards anymore).


> Software vendors shouldn't list CAs as trusted when they prove they can't be trusted - but removing a CA from the trust store breaks things for innocent websites who just chose a crappy CA.

It seems like the way to fix this is to phase out bad CAs over time. Since certificates already have expiration dates, and CAs typically don't issue certificates for longer than a couple of years, this should be easy: you don't trust any certificate signed by that CA which expires more than two years after the date you blacklist them. The certificate some website got six months ago keeps working until it expires in a year and a half, but a certificate issued tomorrow fails immediately, because it expires two years and a day after you removed the CA. The site operator will discover that in testing and use a different CA before it reaches production.
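The phase-out rule described above fits in a few lines. A minimal sketch (the CA name, blacklist date, and two-year window are all hypothetical):

```python
from datetime import date, timedelta

# Hypothetical blacklist: CA name -> date it was removed from the root store.
BLACKLISTED = {"BadCA": date(2014, 7, 8)}
MAX_LIFETIME = timedelta(days=2 * 365)  # "a couple years" of normal issuance

def still_trusted(issuer, not_after, today):
    if not_after < today:
        return False  # expired anyway
    cutoff = BLACKLISTED.get(issuer)
    if cutoff is None:
        return True   # issuer is not blacklisted
    # Anything expiring beyond cutoff + 2y must have been issued after the ban.
    return not_after <= cutoff + MAX_LIFETIME

print(still_trusted("BadCA", date(2016, 1, 1), date(2014, 8, 1)))  # True: grandfathered
print(still_trusted("BadCA", date(2016, 8, 1), date(2014, 8, 1)))  # False: post-ban issuance
```

Note the check never consults the issuance date, which (as the subthread below points out) a compromised CA could backdate; only the expiration date matters.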


I agree that sounds like it would work, and it would avoid the hassle of dealing with a giant file listing thousands of certificates.

It depends on how worried you are about (a) proactively contacting site administrators; (b) stopping the CA issuing anything at all, instead of allowing them to issue shorter and shorter certificates as the end date approaches - and thereby demonstrating clear, immediate justice instead of slow, drawn-out justice; (c) 10 year certificates [1]; and (d) allowing website owners to proactively check for accidentally-issued certificates by monitoring the certificate lists.

(admittedly, monitoring the certificate lists wouldn't help if the bad certificate wasn't on the main signed public cert list, and people being MITMed were served a different signed public cert list. So this wouldn't be safe against CA insiders or CIA warrants)

[1] Really. https://www.positivessl.com/ssl-certificate-positivessl.php


> 10 year certificates

Really browsers should just discard anything that isn't a root CA and expires more than two years from now as a matter of principle. Consider the consequences of issuing a certificate that far out on domain ownership transfers: Facebook bought facebook.com in 2005. The previous owner could still have a signed certificate for it.


That's useless. A compromised CA could create a certificate with an arbitrary start and expiration date. So this only penalises users who obtained certificates legitimately, and doesn't work at all if the CA's crown jewels are stolen.


His idea deals with the problem of CAs having no incentive to do their job properly: under the status quo, they never lose their ability to issue certs.

You are addressing the other problem: CRLs essentially.


> That's useless. A compromised CA could create a certificate for an arbitrary start and expiration date.

Sure, next month they could still be issuing certificates that expire in 23 months and they would be valid. But a year from then they'll only be able to issue certificates that expire in 11 months, and after two years they'll only be able to issue certificates that are already expired.


Arbitrary start and end dates mean you could backdate the certificate to be good for two years starting from a year ago, even though you created it today.

That's what the parent post is talking about.

[edit: You're right. Even w/ arbitrary dates, end dates crossing the threshold will still not be accepted.]



If only there was a blockchain for this.


There's a growing literature and conversation about the tradeoffs between blockchains and the approach that CT takes. They are actually historically related to one another.

http://www.aaronsw.com/weblog/squarezooko

Ben Laurie (a co-inventor of CT) had some particular objections to blockchains, partly to do with the problem that the total computational power that is or will be brought to bear on mining is unknown and maybe unknowable.

http://www.links.org/files/decentralised-currencies.pdf http://www.links.org/files/distributed-currency.pdf

I'd like to actually make a web page that talks about the historical relationship between blockchains and other decentralized append-only data stores, as well as the points of debate and trade-offs between different designs, and the space of proposals that achieve consensus about the contents of an append-only record.


The CT approach is a lot more efficient assuming a closed network of pre-registered "miners" (logs, in CT parlance). This works ok for CT because CT does not place such a large emphasis on decentralisation, nor does it need to. Ultimately the power in the SSL system lies with browser and OS vendors, in turn that means only a handful of people really. Having a super fancy ultra-decentralised system that will only be used by a handful of OS services and browsers doesn't make a whole lot of sense.


Don't know about this Certificate Transparency thing but Namecoin does allow a certificate fingerprint to be associated with a domain, without using a CA, and it works rather well.


That does work for domains within Namecoin, but it cannot be used for ICANN TLDs. People have suggested pairing schemes to allow domains under other TLDs to be associated with Namecoin records but I have yet to see one that is secure (they all rely on honest miners).



okTurtles is one of several middleware options that run on top of Namecoin, implementing the domain record spec. It advocates using someone else's trusted server as a data source, but does not at this time implement any integrity protection for data in transit.

http://www.freespeechme.org/ is more mature and more secure.

Edit: DNSChain does not appear to provide certificate validation for .bit domains in common browsers at this time. It can serve DANE records, but the browsers don't currently verify them and the existing browser extensions for adding DANE support require DNSSEC which DNSChain does not provide, unless I'm missing something.


I think DNSchain offers some compatibility with .bit domains, but the real innovation happens with .dns domains?


FreeSpeechMe is not practical for use because it requires running Namecoin locally.

DNSChain is just as secure as FreeSpeechMe when connected to a trusted DNSChain server.

> It advocates using someone else's trusted server

This is misleading. You and Jeremy keep repeating this line and it honestly feels like a vendetta at this point (I hope it's not). It advocates connecting to your DNSChain server (or a close friend's). This is security that is either on par with FreeSpeechMe (without the baggage), or orders of magnitude better than what's provided by HTTPS today.


Certificate Transparency does not work.

It does not, as it claims, always provide a log of issued certificates and it does not stop MITM attacks:

- http://www.ietf.org/mail-archive/web/trans/current/msg00233....

- http://okturtles.com/#oktvs

It also does nothing to fix the non-functioning certificate revocation problem (which is not a problem with the blockchain).


If more than one signature was allowed on a certificate, then sites would be able to have more than one path to the root. Then it wouldn't matter if any single CA was removed, giving people time to obtain new signatures.


Agreed, but I think it should go a step further. Browsers should require 2 signatures from different regions. That way a government can't pressure a single CA within their borders to issue rogue certs. 2 signatures would be minimum with 3 for redundancy.
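That rule is easy to sketch, assuming a hypothetical root store that tags each trusted CA with a region (all names here are made up):

```python
# Hypothetical root store: trusted CA name -> region it operates in.
ROOT_STORE = {"CA-EU": "eu", "CA-US": "us", "CA-APAC": "apac"}

def quorum_ok(signing_cas, minimum=2):
    # Count distinct regions, not signatures, so one government
    # can't satisfy the quorum by coercing two CAs inside its borders.
    regions = {ROOT_STORE[ca] for ca in signing_cas if ca in ROOT_STORE}
    return len(regions) >= minimum

print(quorum_ok(["CA-EU", "CA-US"]))  # True
print(quorum_ok(["CA-EU", "CA-EU"]))  # False: same region twice
```

The open question is the one raised upthread: today's X.509 certificates carry exactly one issuer signature, so deploying this needs either a format change or multiple parallel certificates per site.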


Agreed: this is a major operational flaw with deployed PKIX.

It has many unintended consequences, for example it's especially a pain with UEFI Secure Boot.


A compromise that wouldn't hurt innocent websites would be to remove the CA's ability to issue new certificates.

CAs knew that the stakes were high when they signed up to be a CA, so I think it'd be fair to remove them permanently. Eventually, only the truly paranoid would be left. It may cost a bit more to run a fully secure CA, and that would be reflected in the cost of a certificate from those that remain, but that'd be a price that everyone should be willing to pay.


> It may cost a bit more to run a fully secure CA, and that would be reflected in the cost of a certificate from those that remain, but that'd be a price that everyone should be willing to pay.

I don't know, this could really discourage the adoption of HTTPS. We need to be making it easier for sites to adopt it.

Ideally we could lessen our dependence on CAs. Sure, a "fully secure/expensive" CA would validate a certificate's authenticity with high confidence, but we'd be excluding tons of hobby projects and small shops who wouldn't want that expense.


Seems to me a CA that can't avoid issuing bad certificates is a bad thing for security, no matter how cheap they are.


What good does widespread HTTPS use do if you cannot trust it?

I'm all for reducing our dependence on CAs, and for reducing the number of CAs we trust to sign a site, e.g. by putting the keys in DNSSEC. But until we do that, we must not trust bad apples.


It's always better than plaintext HTTP. Nothing is worse than plaintext.

TLSv1.2 with good ciphersuites (ECDHE with a named curve like secp256r1, which has murky origins, I know, but we know of nothing else wrong with it, though Curve25519 is superior in my opinion, plus an AEAD like AES-128-GCM or CHACHA20-POLY1305), properly implemented with no TLS compression, comprehensively prevents Eve from spying on the contents of your connection in most cases, even if you're using self-signed certificates with no pinning and are entirely unprotected from Mallory.

A browser shouldn't call it secure or the endpoint trusted, but it should transparently replace all uses of unencrypted HTTP worldwide, including internal ones. That should not be discouraged, and will hopefully be actioned with HTTP/2 - that was the plan, anyway.

Even if the encryption is crap (Export ciphers... RC4?) I guess it does take some more work for Eve, which knocks out a few of the lower-capability adversaries.
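A sketch of that kind of opportunistic-encryption client, using Python's ssl module (the cipher string is illustrative, and whether ChaCha20 suites appear depends on the linked OpenSSL; this is deliberately not a production config, since it disables certificate validation):

```python
import ssl

# Opportunistic encryption: stops passive eavesdropping (Eve) but,
# with validation disabled, does nothing against an active MITM (Mallory).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE               # accept self-signed certs
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3/TLS 1.0/1.1
# Forward-secret key exchange plus AEAD ciphers only.
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
ctx.options |= ssl.OP_NO_COMPRESSION          # TLS compression enables CRIME

names = [c["name"] for c in ctx.get_ciphers()]
print(names)
```

A browser using such a context shouldn't show a lock icon, but it would still raise the cost of dragnet interception compared to plaintext HTTP.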


Couldn't Mallory just replace the certificate with her own? The browser would just pop up the same warning. The warning should at least show some form of public key fingerprint before you click through "I understand the risk".
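For what it's worth, the fingerprint such a warning could display is cheap to compute; a stdlib-only sketch (the input bytes below are a placeholder for the server's real DER-encoded certificate):

```python
import hashlib

def fingerprint(der_bytes):
    """SHA-256 fingerprint in the colon-separated form warnings could show."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Placeholder input; a real check would hash the certificate presented
# by the server and compare it out-of-band with the site operator's copy.
print(fingerprint(b"placeholder DER certificate bytes"))
```

The point is that a user (or helper tool) could compare this value against one obtained over another channel, which is exactly what the bare "I understand the risk" dialog doesn't let you do.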


I'm not sure it's possible to absolutely determine when a certificate was issued in a way that doesn't either introduce another third party (i.e. a timestamping service) or leave something the CA can trivially bypass.


A distributed no-trust-required timestamping service is fundamentally what Bitcoin provides.

So it would be possible to prove a certificate was issued before a certain time (eg. before a breach at a CA), if CAs were forced to immediately publish, in the Bitcoin block chain, the hash of each newly issued certificate. There would be no need to trust a single (bypassable/hackable) entity doing timestamping, since the block chain itself is distributed.

(This is just one of the numerous applications that distributed block chains enable IMHO! And we are just barely scratching the surface of what we can do with such a concept...)


For some applications, yes.

But the bitcoin blockchain is already too big to store and sync on mobile devices with costly data connections - and it would only get bigger if it had to contain every SSL certificate ever issued!

You could have SSL clients connect to trusted nodes, like mobile bitcoin clients do [1], but then you've basically taken Certificate Authorities and replaced them with 'trusted nodes' so you're not much better off really.

[1] https://en.bitcoin.it/wiki/Thin_Client_Security


The bitcoin blockchain doesn't need to contain every SSL certificate. It just needs to contain a SINGLE transaction conducted each day where the comment field contains the hash of a document published by the root authority. That document would be published elsewhere (on the site of the owner of the root certificate) -- the bitcoin blockchain would be used only to provide proof that the document hadn't been changed since it was published.
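A sketch of that scheme (all values hypothetical; the transaction mechanics are elided, and only the hashing and later verification are shown):

```python
import hashlib

# The CA publishes a daily document listing new certificates on its own
# site; only the document's 32-byte hash is embedded in a Bitcoin
# transaction (e.g. an OP_RETURN output, which allows up to 80 bytes).
daily_document = b"2014-07-08\ncert-hash-1\ncert-hash-2\n"
anchored_hash = hashlib.sha256(daily_document).digest()

def verify(document, hash_from_blockchain):
    # Anyone can later prove the document existed, unaltered, at the time
    # the block containing the hash was mined.
    return hashlib.sha256(document).digest() == hash_from_blockchain

print(verify(daily_document, anchored_hash))                # True
print(verify(daily_document + b"tampered", anchored_hash))  # False
```

This keeps the blockchain footprint constant per day regardless of how many certificates the CA issues, which addresses the size objection above.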


The problem is that distributed block-chains are not a panacea. It requires a certain inertia to increase the difficulty of attacks on the block-chain, and even then attacks are still possible if your pockets are deep enough.


I assume this would be done by looking at the issuing date of the cert and rejecting certs with an issuing date later than the date the CA was 'executed'.

However, a CA could just issue certificates with the issuing date in the past. They probably wouldn't do this for real customers, because it would be detected easily; however, they could do it for fraudulent customers, and it would be a massive temptation for a CA with no other source of revenue.


> That way, when a CA fucks up like this, vendors could remove their certificates from the root stores, but could grandfather in all their previous certificates so the CA's customers have a few months to get a certificate from a decent CA.

You cannot grandfather in old certificates because you have no idea what else may have been tampered with. If they have a security hole, it could have been there for a long time. There is almost no way to know if a certificate was improperly issued unless you're the domain owner. All of them have to be invalidated to stop any other potentially invalid certificates.

This is why the CA model is so stupid. It relies on humans at over 650 institutions "doing the right thing" every time, and when the inevitable happens, destroying a large number of innocent certificates.

In terms of sending a message, we already know that you can issue invalid certificates with impunity. Exploit a CA, make your fake cert, use it, get found out eventually, repeat. It doesn't work as well for pinned certificates but there's more than one way to skin that cat.


If there are no consequences then there's no value to the CA system. Why bother with it then?


> Every CA should be required to publish a signed, public list of every certificate they have issued that is currently valid;

A LIST??? This will be much MUCH larger than a list of revocations, take a LONG time to verify, and frankly, would never be up to date.


Moxie Marlinspike gave a talk at DEFCON 19 about how broken the CA model is and suggested an alternative.

The talk: https://www.youtube.com/watch?v=pDmj_xe7EIQ
The alternative: http://convergence.io/


I had a notary up and running, but wasn't able to force the browser to use it. Unfortunately the project doesn't get enough attention from the community.


https://www.youtube.com/watch?feature=player_detailpage&v=Z7... <= seems like we should also block GeoTrust - GeoRoot !


Once again, demonstration that the CA model is broken. Why does it make sense for any CA to be able to issue certificates for any domain?


I'd say it's not the CA model itself that's broken; it's the way we use these CAs that is too static to account for inevitable breaches in security.

There is no easy way to add or remove a CA from our trusted sets. There is no way for a server to send multiple signatures to the client, so the client can choose any CA he actually trusts and silently discard the bad CAs.

There is no problem in having any CA issue a certificate for any domain; the problem is that we have to trust these CAs and we can't move fast enough.


Sending multiple signatures and relying on a quorum of CAs might be a great solution.


What would the solution be?


DANE TLSA is probably the best current option.
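For reference, a TLSA record binds a certificate (or its public key) to a DNS name. A sketch of computing the DANE-EE association data (usage 3, selector 1 = SubjectPublicKeyInfo, matching type 1 = SHA-256, per RFC 6698; the SPKI bytes here are a placeholder for data extracted from a real certificate):

```python
import hashlib

# Placeholder: real code would extract the DER-encoded
# SubjectPublicKeyInfo from the server's certificate.
spki_der = b"placeholder SubjectPublicKeyInfo bytes"

# Matching type 1 = SHA-256 of the selected data.
assoc = hashlib.sha256(spki_der).hexdigest()

# The record a zone operator would publish for HTTPS on port 443:
print(f"_443._tcp.example.com. IN TLSA 3 1 1 {assoc}")
```

The catch, as noted elsewhere in the thread, is that clients then need DNSSEC validation to trust the record they receive.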


On that note, it's worth taking a look at https://tools.ietf.org/html/draft-nygren-service-bindings-00 - the "B" record, which is in a very early speculative draft. One shot among several at gluing this stuff together.


A) Have the initial connection give the client all the domain verification info it needs for the future. Now secure connections can be established with or without a CA. Upside: independence; there's only one moment at which the connection can be exploited. Downside: anyone who MITMs that first connection can MITM it forever onwards.

B) Use a third party to register the information to secure the initial connection. This is what CAs were created for, and now people are suggesting DNS (and I would wager somebody might recommend whois at some point) as central trusted authorities for this data. Upside: no initial connection mitm, attack surface is minimized compared to number of CAs, distributed model. Downside: still a 3rd-party central authority, and due to smaller attack surface one successful attack could have much wider-ranging consequences, and it adds complication.

C) Decentralized network of 3rd party peers to share authority information. Upside: decentralized, loosely organized, requires a bigger attack surface to disable. Downside: peers not held to as high a security standard, not necessarily businesses so no guarantees of integrity.

D) Require the user to get out-of-band information to verify the integrity of the initial connection, and after that any connection between the source and destination is secured in any number of ways. Upside: total security not dependent on 3rd parties, no man in the middle. Downside: incredibly inconvenient and totally unscalable.

E) All of the above. Support all possible schemes and allow the domain and the users to determine which of them they will use. Layer all different forms of verification so that the number of successful exploits to achieve a fake cert is so complicated that only a few dedicated state actors or elite hackers would spend enough time on it to be successful. Upside: users have choice, domain owners have choice, increased security overall. Downside: browsers have to figure out how to explain to the users what the fuck is going on when they get a warning about a cert now.


The problem is that a CA is issuing two things:

1. An identity.

2. A secure communications certificate.

These do not need to be the same thing or issued by the same entity. I propose to modify the system as follows:

1. Register a domain name using an email address and a PGP key.

2. The registrar verifies that the applicant has the private key by requiring a clicked link in an encrypted mail.

3. A phone number is required for "full trust".

4. Thus, with a private PGP key and a phone, we've established that the applicant is the applicant. It really doesn't matter who or what that applicant is -- just that the applicant, and only that applicant, has the required items.

5. The registrar passes the public key and the top domain to DNS servers. DNS validates against the registrar. And trust in the domain as an entity is now established.

6. Secure communications with the domain can now be established through the use of certs issued by the top-level domain. And the domain becomes its own authority.

Any changes to registrar/domain information require decrypting an email and answering a phone. Expensive and compromised CAs go away - domain records become secure - trust becomes decentralized - and domain owners hold all the keys... The only thing anyone else has is the domain holder's public key... SSL itself remains in place as is...

Sure, if you lose your private key, you're screwed -- but that's just part of being a responsible domain owner.
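The challenge-response in step 2 can be sketched end to end. Note this is a toy: it uses a shared secret and an XOR keystream as a stand-in for the applicant's PGP keypair (the standard library has no public-key crypto), so it only illustrates the flow, not the asymmetric security:

```python
import hashlib
import os

def keystream(key, length):
    # Deterministic keystream derived from the key; stand-in for real crypto.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_to(key, message):
    return bytes(a ^ b for a, b in zip(message, keystream(key, len(message))))

decrypt_with = encrypt_to  # XOR stream: the same operation both ways

# Step 1: applicant registers a domain along with a key.
applicant_key = os.urandom(32)

# Step 2: registrar mails an encrypted token; only the key holder recovers
# it, so clicking the link containing the token proves key possession.
token = os.urandom(16)
challenge = encrypt_to(applicant_key, token)
recovered = decrypt_with(applicant_key, challenge)
print(recovered == token)  # True
```

With real PGP the registrar would encrypt to the applicant's public key and only the private key could recover the token, which is what makes step 4's "only that applicant" claim hold.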


Basically, TLS over DNSSEC.

Yes, it's better, because you now trust only one party [1], which is much better than 600 random companies around the world. But it's not a panacea: it still puts too much power in governments' hands, and still relies too much on a third party that can be hacked, bribed, or threatened.

I'd still like it better if browsers pinned the key of every site I access, and issued a warning if it changed without the new key being signed with the old.

[1] Ok, a few, because you don't contact ICANN directly when you get a domain.
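A trust-on-first-use pin store like that is simple to sketch (hypothetical data model; a real browser would also need the sign-new-key-with-old continuity check mentioned above so legitimate key rotation doesn't trigger the warning):

```python
import hashlib

# Pin store: hostname -> SHA-256 fingerprint of the public key (SPKI DER).
pins = {}

def check_pin(host, spki_der):
    fp = hashlib.sha256(spki_der).hexdigest()
    if host not in pins:
        pins[host] = fp  # first visit: remember the key
        return "pinned"
    return "ok" if pins[host] == fp else "WARN: key changed"

print(check_pin("example.com", b"key-A"))  # pinned
print(check_pin("example.com", b"key-A"))  # ok
print(check_pin("example.com", b"key-B"))  # WARN: key changed
```

This is essentially what SSH's known_hosts does, and it shares the same weakness: it can't distinguish an attack from an unsigned key change.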


First off, a phone call is not full trust. A copy of your driver's license, power bills, social security card, etc are closer to full trust.

Second, all you've described is a certificate authority that works inside DNS. Instead of a CA validating your certificate and the client blindly believing whatever the CA tells it, now it's the DNS validating your PGP key and the client blindly believing whatever DNS tells it. It's the same, only now you've got more eggs in one basket to get owned.

So no, any changes to DNS do not require any key from a domain owner. If the client implicitly trusts the DNS it means the DNS can make any change it wants and the client will believe it. This is the whole reason nobody likes the CA model, because the CA can decide at any time to issue a certificate for any domain and any client will just blindly believe it.


You misunderstand.

You're defining trust as trust that the entity is who the entity claims to be in societal space. But this is unnecessary. All that needs to be trusted is that the claimed domain owner is the domain owner. It doesn't matter who that entity is in societal space - just cryptographic space.

Each domain is its own CA. And the registrar doesn't need to be trusted at all. All it is doing is doling out domain names to public keys. There is no need for "true" personal identity to be established - since the only entity that can decrypt the PGP-encrypted verification email is the owner of the associated private key.

I've drawn a very rough diagram for you... http://s26.postimg.org/daz5u1izd/replacement_CA.png


No, I understand PKI. What you've created is less secure than what we have now.

If computer A wants to connect securely to computer B, and it doesn't want its connection to be MITM'd, computer A trusts a 3rd party (computer C) to tell it that computer B is really computer B and not a fake. If you take over the DNS system (which is your version of computer C in my example) you get to write anything to computer A and it will believe it.

The whole idea behind a MITM is that an attacker controls communication between A and B, and ideally also controls communication between A and C. Just controlling the network isn't enough to subvert the connection, though, because an attacker would need to control one of A, B, or C.

If you are the NSA, and you write up a very nice National Security Letter and go to computer C and say "make a fake DNS record for me, because terrorism," you now get to write blank checks for yourself and computers A and B have effectively no idea that their connection is now subject to MITM.

Specifically to exploit your system all you'd need to do is create a fake A record pointing to computer D, and a fake public key record which matches the PGP key and SSL cert of computer D. If you weren't using DNSSEC you could do this on a regular open wireless network with no extra hacking needed. If you are using DNSSEC (and a mandatory validating stub resolver, which Windows doesn't ship with) you'd need to take over the root DNS servers or the registrar which feeds records to the .com. root DNS servers. But the NSA can do all that. Probably others too.

So, again, it's the exact same state we're in with the CAs, with the exception that DNS would be less secure because the client (computer A) doesn't have any public keys/ca certs stored on it to verify the DNS servers or domains against.


In order to create a fake record, you would need to replace the PGP ID on the registrar's record. In order to do that, you'd need to intercept and decrypt a PGP-encrypted email. In order to do that, you'd need to be in possession of the private key and have the passphrase. An attacker could steal the private key from the domain owner's computer [not the server] and install a keylogger to get the passphrase... or an attacker could produce a private key from a public key... good luck with that one...

The system I've laid out would be significantly more secure and less spoof-able than the current system. Further, DNSSEC becomes entirely unnecessary...

Also, it is entirely possible to create a registrar which would store all user record data in an encrypted store which could also be encrypted using a domain owner provided public key... if this were added to the architecture, no government entity could modify anything regarding a domain - except replacement of the entire record.

Of course, since the entire system current and any possible future Internet relies upon computers and networks that are explicitly not under the control of the content provider - any government can at any time break the system... This is always going to be true of any and every system of wide networks.

EDIT: Computers A and B need a third entity C to validate A to B and B to A. True... However, this does not need to be a computer -- it could very well be a cryptographically generated unique ID... In my example, "C" is the PGP ID. However, it could be any cryptographic item that is tied to the domain record and only modifiable using the private key that generated it... for example, it could be a Bitcoin address.


> Of course, since the entire system current and any possible future Internet relies upon computers and networks that are explicitly not under the control of the content provider - any government can at any time break the system... This is always going to be true of any and every system of wide networks.

This is the entire reason we are trying to get away from CAs! What we have works perfectly fine. The only reason anyone wants to get away from it is to prevent state actors from overriding a CA's authority. If you want to just reinvent the wheel with the DNS system, use RFC 6698.


My idea was a thought project, and I was not aware of RFC 6698. Thanks for the reading material, it sounds similar to my thought process...

However, my personal reasons for discarding the current CA system are to enable secure communications from multiple subdomains without the need to pay a $500 rental fee for a wildcard identity+cert -- as well as to enable secure passwordless access to A/MX records without fear of compromise.

The government is going to be able to break any system that's put out there. At the very least, a government can disrupt IP traffic in and out of a node. There is always going to be an open addressing scheme, unless the network is an encrypted peer-to-peer network with every node attempting to decrypt every packet... I seem to remember reading about a block chain peer-to-peer data network being developed. However, my guess is that overhead bandwidth limitations will be a problem for large networks.


Convergence, by Moxie Marlinspike. Some argue that it does worse than CAs in protecting against low-profile phishing attacks, but that's just bollocks.


A distributed secure key-value system a la Namecoin.


You have to store the blockchain on your computer to be able to browse websites securely via Namecoin; if you use 'light wallets' you end up with the same DNS as today (trust a third-party DNS server + CA, or a Namecoin node). You could use a http://convergence.io/ type of idea and connect to two trusted Namecoin nodes, but do we really expect the average user to keep a list of trusted nodes?


If you only want to be able to resolve against Namecoin, it's possible to build a client that stores only the unspent transaction output set for names and a few dozen recent blocks, while keeping full security.


How does that scale in file size if Namecoin had ~215 million domain names like we have in the current ICANN system? Also, what would the solution be for smartphones? Downloading even just a few blocks isn't great on phone networks.


You can have trustless namecoin without storing the blockchain, using techniques like SPV.


According to https://en.bitcoin.it/wiki/Thin_Client_Security SPV relies on the trust of your ISP. When nodes send you the blockchain data, is that an encrypted transfer or open to MITM?



okTurtles is one of several middleware options that runs on top of Namecoin, implementing the domain record spec. It advocates using someone else's trusted server as a data source, but does not at this time implement any integrity protection for data in transit.

http://www.freespeechme.org/ is more mature and more secure.


> It advocates using someone else's trusted server as a data source

Wrong; it advocates using your own server, but proposes public servers for tests. Here's an excerpt of the README:

    DNSChain is meant to be run by individuals!

    Yes, you can use a public DNSChain server, but it's far
    better to use your own because it gives you more privacy,
    makes you more resistant to censorship, and provides you
    with a stronger guarantee that the responses you get
    haven't been tampered with by a malicious server.


I must have either misremembered or it has been changed since I last looked, then.

It is still the case that using DNSChain will not get you validated SSL for .bit domains without additional software. It can serve DANE records, but the browser won't verify them and the existing browser extensions for adding DANE support require DNSSEC which DNSChain does not provide.


> It is still the case that using DNSChain will not get you validated SSL for .bit domains without additional software.

Yes, just like with FreeSpeechMe, the SSL validation will initially work via a browser extension, but unlike FSM (heh) it does not carry the additional baggage of requiring you to run a Namecoin node on your [phone/laptop/etc.].


DNS DANE, SSL signing with two CAs, peer-based trust, SSL pinning. We probably need a mix... and none seems ideal.

What would help is being more ruthless when CAs mess up, not the pussy approach we use now: do nothing.


Exactly. It does not make any freakin' sense, yet people are banging on implementation details like which cipher is better, as if you could not circumvent the entire process with a cert signed by a compromised CA.


[deleted]


Tie it to the DNS. Globally, there is one root CA which is authorised to issue certificates only for TLDs. Then, each top-level registry (DENIC, Nominet etc.) gets a certificate from this CA which allows it to issue certificates only under its own TLD.

This is quite similar to DANE mentioned in the sibling, except that it doesn't directly tie into the DNS and instead just merges the two organisations. Vendors could then ship the root CA and/or each TLD certificate. Even when one NIC gets compromised, all it can do is issue wrongful certificates for that given TLD (and rogue/untrusted ones are restricted to their namespace). There is still a SPOF in the "root" CA issuing TLD certs, but at least there is only one, instead of the 100 or so root CAs plus the hundreds of intermediate CAs which, in theory, can currently issue certs for any domain just as well. Since it's been a while since I heard of a case where TLD nameservers got wrongfully replaced in the root zone, this seems to be a reasonably safe organisation.


There's a fundamental flaw with the "CA/DNS/HTTPS/web browser UI" infrastructure: it conflates the concepts of "encrypted transmission" and "proof of identity". Despite the scary red screens browsers like to show, it's never the case that you're less secure using https to talk to a given server, even if the certificate the server presents is expired, mislabeled, or even entirely forged. Regardless of the encryption, there are many ways to fool you into believing the server you're talking to belongs to some other entity. CAs were supposed to address that, but they are consistently failing at it, by accident and indifference as well as through hacking. These concerns need to be separated. I believe the end game is for servers to generate their own certificates for encrypted communication, with some extension to DNS ultimately handling the proof of identity, but I don't see an obvious evolution from the existing CA system.


> There's a fundamental flaw ... that conflated the concepts of "encrypted transmission" and "proof of identity".

You cannot separate those concepts. If you don't have proof of identity, your connection is not secure.

>Despite the scary red screens browsers like to show it's never the case that you're less secure using https to talk to a given server, even if the certificate the server presents is expired, mislabeled, or even entirely forged.

But I completely agree with that, there should be warnings for plain HTTP, with the same severity used for self-signed sites. (And current browsers are too severe with self-signed certs, to the point that they reduce the security of people accessing those sites.)


This.

The UX [especially] places the priority of identity ahead of a protected, private conversation, and then encapsulates that information too broadly. Secondly, verification is too static, too binary (all or nothing), and too optimistic.

1. To what degree is the line leaky and observable?
2. To what degree are we confirming the conversation participants' identity claims?
3. To what degree is the conversation following normal conversation protocol?


Another alternative would be to restrict which TLDs a CA can issue for - e.g. China's CA would only be able to issue .chn domains etc.


Name constraints apply all the way down the chain - intermediates are implicitly limited by the constraints on their root.

CAs have been name constrained in the past, see the update here about ANSSI: http://googleonlinesecurity.blogspot.com/2013/12/further-imp...


[deleted]


A Chrome update was needed because the constraint was applied retrospectively. A name constraint is usually an X.509 extension that is included in the certificate.


[deleted]


> Every time you visit a page the OS/browser doesn't go up the entire chain of trust and check for constraints.

Oh, certainly they do. Name constraints work just like that: https://tools.ietf.org/html/rfc5280#section-4.2.1.10

So does Extended Key Usage in practice, although it's not defined that way.

There are some platforms where name constraints aren't implemented, but CAPI (Windows) certainly does implement it and I believe that NSS does also.
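To make the mechanism concrete, here is a toy, stdlib-only sketch of the dNSName matching rule from that RFC section, covering only permitted subtrees; real validators also handle excluded subtrees, IP ranges, directory names, and so on. The hostnames used are illustrative.

```python
# Toy sketch of RFC 5280 section 4.2.1.10 dNSName constraint matching:
# a constraint of "example.com" permits that host and any subdomain.
def dns_name_matches(hostname: str, constraint: str) -> bool:
    hostname = hostname.lower().rstrip(".")
    constraint = constraint.lower().lstrip(".").rstrip(".")
    # Match the name itself, or any name formed by prepending labels.
    return hostname == constraint or hostname.endswith("." + constraint)

def permitted(hostname: str, permitted_subtrees: list) -> bool:
    # With the extension present, a name is only valid if it falls
    # inside at least one permitted subtree.
    return any(dns_name_matches(hostname, c) for c in permitted_subtrees)

print(permitted("mail.gov.in", ["gov.in"]))          # True
print(permitted("accounts.google.com", ["gov.in"]))  # False
```

Note that a constrained intermediate like the ANSSI one above simply carries this extension in its own certificate, so every chain it signs is checked against the subtree list by any client that implements the extension.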


This event also highlights, again, that our Certificate Transparency project is critical for protecting the security of certificates in the future.

No! Certificate Transparency still relies on central authorities. We need to get rid of CAs. TACK + Convergence is the correct solution.


I've never heard of TACK before.

Are these what you are referring to? [1] [2]

[1]: http://tack.io/ [2]: http://convergence.io/


Yes


I agree decentralization is desirable but haven't studied all of the proposals in depth. Over at http://www.certificate-transparency.org/comparison Google claims:

... that their CT approach is superior to TACK because:

(T1) Servers can instantly roll out a new key if the previous one is lost.
(T2) Detection of both global (i.e. the whole internet sees the evil server instead of the good server) and targeted attacks is superior.
(T3) There are no trusted third parties.
(T4) Newly issued keys on totally new sites can also be validated (to a greater extent).
(T5) No server modification (i.e. to deliver pinning headers) is required.

... that their CT approach is superior to Convergence because:

(C1) It is not known to introduce side-channel attacks due to changes in the SSL connection negotiation phase.
(C2) Servers can instantly roll out a new key if the previous one is lost.
(C3) Global attacks (i.e. the whole internet sees the evil server instead of the good server) are negated.
(C4) There are no trusted third parties.
(C5) Newly issued keys on totally new sites can also be validated.
(C6) No server modification (i.e. to deliver pinning headers) is required.

Can anyone refute these claims?


Alexandra C. Grant wrote a paper comparing different methods of improving the current CA system: http://www.cs.dartmouth.edu/reports/TR2012-716.pdf

But unfortunately she does not take TACK/pinning + Convergence in consideration.


I wrote an article discussing many CA alternatives which also includes Convergence, here:

https://medium.com/bitcoin-security-functionality/b64cf5912a...

Convergence IMHO does not work. The UI is poor, and fundamentally it's just the CA model with very short-lived, constantly renewed certificates. There's no particular reason to believe it'd work better than the existing PKI for ordinary users.


> it's just the CA model with very short lived constantly renewed certificates

Very strange conclusion. Convergence has the following properties the CA model does not:

* trust is optional (you don't have to trust Iranian CAs)
* trust is revocable (you can safely remove trust from any notary)
* trust is distributed (you trust only if all notaries are acting as one, as opposed to "you trust anything any CA says")

Notaries are not signing anything; they are not CAs. Also, there is nothing like "short-lived, constantly renewed certificates" in this model. Hosts use self-signed certs (or CA-signed, it doesn't matter). Notaries operate on the assumption that an attacker will not MITM the whole Internet, and only help you detect when something has gone wrong.

If anything, convergence is a combination of TOFU and WoT models. Although an attempt to describe a security model by such comparisons does not help much.
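The "all notaries acting as one" rule above can be sketched in a few lines; this is a toy illustration of the idea rather than Convergence's actual protocol, and the fingerprints are fabricated.

```python
# Toy sketch of the Convergence-style trust rule: ask several
# independent notaries what certificate fingerprint they observe for a
# host, and trust the connection only when they are unanimous.
def notaries_agree(observed_fingerprints: list) -> bool:
    # A targeted MITM near the victim changes what only some vantage
    # points see, so disagreement is the alarm signal.
    return len(set(observed_fingerprints)) == 1

print(notaries_agree(["ab:12", "ab:12", "ab:12"]))  # True: unanimous
print(notaries_agree(["ab:12", "ab:12", "ff:00"]))  # False: disagreement
```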


> Convergence IMHO does not work. The UI is poor

There is no need for a UI in Convergence that I know of? What part are you talking about?

> and fundamentally it's just the CA model with very short lived constantly renewed certificates.

I don't understand what you are referring to. There are no "short lived constantly renewed certificates" in Convergence.

Maybe you mean something else


OK.

That's interesting coming from the Bitcoin angle as I've seen Trezor present before and personally opposed Gavin's stance on both SSL use and the general scope increase in Bitcoin's Payments/Receipts discussions. Deaf ears.


Reading the comparison table you linked, there is a misleading point, though I may be misunderstanding it: they claim that CT "does [...] avoid the need for the client to trust a third party", but in reality the client still needs to trust CAs and have the CA keys in its trust store. Removing a CA key from the browser's trust store basically invalidates all certs signed by that CA (just like it does now). CT is just a framework for monitoring the released certificates; trust in the CA still needs to be there. And as Moxie said: "Whenever someone is proposing another authenticity system, the question that we should all ask is: who do we have to trust, and for how long? And if the answer is: a prescribed set of people, forever, proceed with caution."

With CT we still have to trust CAs. Basically forever.


The CCA is so aware of its own vulnerability that it refrains from using SSL on its own page http://cca.gov.in/cca/index.php - no https here :)


Indian govt websites have a pathetic sense of security. One prominent consumer-facing website with logins and company data has a certificate issued to "Mohan Babu" (equivalent to John Doe) that expired 5 years ago! I guess that's somewhat better than the other Indian govt websites that have no SSL at all!


> At this time, India CCA is still investigating this incident. This event also highlights, again, that our Certificate Transparency project is critical for protecting the security of certificates in the future.

What is there to investigate? If they had a proper system in place this should not require 'investigation'.

While I embrace the global infrastructure, it's a bit weird to grant certificate authority rights to a country with a pretty broken legal system (re: Avnish Bajaj, etc.).


Presumably if they got hacked, they want to know how they got hacked. Perhaps there's a new zero day on the loose. It always makes sense to investigate these things.


Incidents like this are why I call our PKI a scam and a racket. The fact that this can ever happen points to massive, systemic problems in the trust model.


No, it doesn't.

You appear to believe that any security system that has any failure, ever, indicates "massive systemic problems". I assert that there are no security systems in history that would meet such a standard.

There are an enormous number of sites and certificates out there. Studies have been carried out at scale on attacks on SSL and found that most MITM attacks come from locally installed virus scanners, malware, or company firewalls. Hacked CAs didn't even register. So if you represented the number of bad certs as a percentage, it'd probably have a lot of zeros after the decimal point.

That doesn't mean the world should sit on its hands: even though real-world studies show the problem is rare, requiring all certs to be public would be a massive upgrade.


> a scam and a racket

Can you elaborate? I've heard a lot of people say it's broken or badly designed, but not that it's malicious or intentionally broken.


Well, it's broken and there are people getting lots of money due to the fact that it's broken.

Almost certainly the creation of the standard was not malicious, and almost certainly it currently gets support of people acting with malice. But I don't have anybody to point a finger at, even the most logical suspects aren't overtly trying to keep it broken.


Maybe we need a browser add-on that warns us when a shady/incompetent CA has signed the certificate of the current site we are on? As it stands today there are no repercussions for terrible CAs that screw up like this.


If you don't trust the CA, you could just remove it from the root store of your OS or browser.


True, but that is a manual process that most users don't know how to carry out. We need to make it easy for the masses to "punish" the terrible CAs. If you can put pressure on the bad CAs, they will at least try to get better.


Understanding SSL is already really hard. What is a user supposed to do with a warning about a valid cert from a questionable CA? The site is probably fine so you're mostly just teaching them to ignore SSL warning messages.


The idea is to cause at least a small percentage of users to distrust the cert/CA. That would push sites that buy certificates away from the shady CAs, because of the user complaints they'd get after browser warnings were shown.


The masses have no interest in fiddling with their browser settings. Ultimately the decision makers here are the browser makers (i.e. guys like agl).


The CA system is broken, and so is BGP, with routes being essentially hijacked via a word-of-mouth protocol. I wonder what the fixes, or a reboot of the internet, would look like.



It's a scary thought that this probably has been going on mostly undetected for over a decade before Chrome added cert pinning.
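For reference, the check behind certificate pinning (whether HPKP headers or Chrome's built-in pin list) boils down to something like the following stdlib-only sketch; all key bytes and pin values here are made-up placeholders.

```python
# Rough sketch of a pin check: hash each public key (SPKI) in the
# presented chain and pass if any matches a pinned value.
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """base64(SHA-256(SubjectPublicKeyInfo)), the HPKP pin format."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def connection_allowed(presented_spki_chain: list, pinned: set) -> bool:
    # Validation passes if ANY key in the chain matches a pin, which
    # lets sites pin a backup key or an intermediate's key.
    return any(spki_pin(k) in pinned for k in presented_spki_chain)

good_key = b"placeholder for the site's real SPKI DER bytes"
pins = {spki_pin(good_key)}
print(connection_allowed([good_key], pins))                      # True
print(connection_allowed([b"key from a rogue CA's chain"], pins))  # False
```

This is why a mis-issued cert for a pinned domain fails in Chrome even though it chains to a trusted root: the rogue chain contains none of the pinned keys.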


I wonder if having your registrar be the only one able to issue you a cert for your domain would solve this. That way the user can verify that the cert was not only signed by a trusted CA but by a trusted CA for this specific domain.


The registrars would love this ;)

DANE and DNSSEC feel to me like the only currently proposed replacement for the CA system that has a chance of succeeding, not necessarily because of technical superiority, but because of practicality and simply being "good enough".


It's technically superior.

Instead of trusting 600 CAs, with DANE you only trust the TLD, the second-level registry if one exists, and your registrar. It's an incredibly smaller attack surface. You can also register in a second TLD, adding redundancy for any system that knows your address beforehand.
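Concretely, DANE publishes the certificate-to-name binding as a DNSSEC-signed TLSA record. This hedged sketch computes the payload of a common variant (usage 3 = DANE-EE, selector 1 = SubjectPublicKeyInfo, matching type 1 = SHA-256); the SPKI bytes and domain are stand-ins, not a real deployment.

```python
# Sketch of a DANE TLSA "3 1 1" record: SHA-256 over the server's
# public key, published in the (DNSSEC-signed) zone.
import hashlib

spki_der = b"stand-in for the server's DER-encoded SubjectPublicKeyInfo"
digest = hashlib.sha256(spki_der).hexdigest()

# A validating resolver can check this binding without consulting
# any CA at all:
record = f"_443._tcp.example.com. IN TLSA 3 1 1 {digest}"
print(record)
```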


I wonder if we can map every intermediate?

Obviously Certificate Transparency (or any public audit log to some extent, really) helps a bunch with this sort of thing.


So when does the CA death penalty occur?


Never. Once a CA, forever a CA. You can't remove a particular CA at the local level (too hard for users), and you can't remove one globally, because of all the legitimate certs signed by that CA.


DigiNotar went bankrupt and had their cert removed from browsers after they tried to cover up a hack: http://en.wikipedia.org/wiki/DigiNotar


I wonder what those other domains were and why Google didn't pin them. Is it costly to pin a domain?


The Cathedral isn't finished and it's crumbling already.

Back to the Bazaar!


> The India CCA certificates are included in the Microsoft Root Store and thus are trusted by the vast majority of programs running on Windows, including Internet Explorer and Chrome.

Jesus Christ, the CA system is so broken.

