Every time I bring up the following point someone chimes in that it's a bad idea, but I still fail to understand why it's not easy to pick which CAs I want to trust by picking a list of entities/people I trust and then adopting their recommendations for which CAs to trust.
This would be a few clicks of UI to let me be intelligently paranoid while maintaining only a layperson's understanding of why (say) Bruce Schneier decides to trust some and not others.
Or, as you said, being able to subscribe to other recommendations would be cool.
China, Iran, etc. don't want direct attribution.
To my knowledge, Italy has never been part of the Eastern Bloc.
EDIT: Looks like you can - https://stackoverflow.com/questions/2402121/within-a-web-bro...
The other part would be creating a way/file-format for experts to provide information about which CAs they trust and which are suspicious, then let the extension consume those.
It's not as good as built-in browser support, but it's a heck of a lot faster and more do-able.
just like Microsoft didn't want Windows users to be able to set a different default browser...
Nothing is more embarrassing than getting caught with your fingers in the cookie jar.
Besides, I trust that CAs will be de-listed if proven compromised.
The attacker doesn't even need to compromise a CA.
If someone hijacks the IP address of example.com, he could easily get a valid Let's Encrypt certificate for that domain.
If you're a larger outfit you should pick one trustworthy CA vendor (or two independent vendors, according to your risk management profile), and lock CAA to those trusted vendors. You can then agree whatever terms suit your business on top of the Ten Blessed Methods, so as to avoid bad guys stealing your names. For example, Facebook has an agreement with their chosen CA that all facebook.com and fb.com issuances get signed off by Facebook's network security people. No "But I'm the head of Asian marketing! I need this immediately" bullshit - it gets a sign-off from netsec or it doesn't get issued.
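For a zone file, locking issuance to chosen CAs with CAA records looks roughly like this - the domain, CA names, and report address are all placeholders:

```
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 issue "digicert.com"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

Any CA following the Baseline Requirements must check these records before issuing, and refuse if it isn't listed; the iodef entry tells CAs where to report rejected requests.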
Unfortunately, the mechanism which replaced it (Expect-CT) can only help after the fact - someone would have to notice the extra issuance, and you would have no idea whether your traffic was affected.
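For reference, an enforcing, reporting Expect-CT policy is just a response header - the report endpoint here is a placeholder:

```
Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-reports"
```

As noted, this only helps after the fact: the browser reports or refuses connections lacking valid SCTs, but by then a misissued cert may already exist.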
Anyway, redirecting and sniffing traffic is one thing, intercepting and changing encrypted traffic while being undetected is another. It’s quite a stretch really.
Not because I don't trust it, but I might because Bruce doesn't. At least, if Bruce stops trusting it I'd like to know why before I decide to trust it again.
I think if you imagine the way that compromised CAs would be used in actual attacks it is clear why we can't expect compromised CAs to be widely observed (they would be used only for targeted traffic), and why we should not trust all CAs equally.
For instance, say you want to go to example.com, and it says "you need to trust X CA". Ignoring that a user doesn't even know what that means, all of history demonstrates that a user's goal when encountering a barrier in the way of something they want is to get rid of the barrier. Arguing for user education isn't the right response, because technology needs to be designed to work with humans, not the other way round.
Instead we defer to groups like the root programs and the CA/Browser Forum to ensure that the trust stores contain only robust CAs. Historically there have been few teeth, as evicting CAs from the trust stores is hard - look at how long the Symantec distrust is taking, and look at how many people (incl. HN commenters) have argued against it.
That said, the addition of things like CT logs has finally given trust stores a view into what is being issued, so they can finally detect poorly run CAs at the time they screw up, rather than maybe catching them months later, if at all. That then provides the evidence needed to justify distrusting a CA or auditor.
That is all a large amount of exceedingly technical work, that no regular user could hope to grasp and make a reasonable choice about.
Instead the market pressure is as it should be for CAs now: ensure you are following the rules, or risk distrust. That is pretty much the most effective market pressure you can have in the CA market.
Having to manually select/add CAs is akin to not having SSL and using PGP everywhere. And you know how well that works out, even for technical folks.
I realize the desire to control my own security is thought by browser-makers to be the death of ecommerce.
I still want it.
What happens when some 20 year old gets their first computer, it doesn’t have any CAs in the trust store so how do they find the good ones? How do they know what the good ones are?
You think you want it, but you don't necessarily know which CAs are good - have you read all the audits? (Which you could only trust if you got them over TLS, which requires trusting a CA in the first place.)
What happens when you get a site saying “you need to trust ‘Google CA 1’/‘LetsEncrypt’” (or whatever their public CA is). I’m joking of course - the name is meaningless - there’s no root of trust to verify that it’s not someone making a certificate that just claims to be that CA. So you actually need to know the public key for each CA that you think is trustworthy.
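In practice, "knowing the public key" means checking a fingerprint obtained out of band rather than trusting a name. A minimal sketch with openssl - the file name is a placeholder for a root certificate you got through some channel you already trust:

```shell
# Compute the SHA-256 fingerprint of a CA certificate.
# "root-ca.pem" is a placeholder for a cert obtained out of band.
openssl x509 -in root-ca.pem -noout -fingerprint -sha256
# Compare the printed fingerprint against one the CA publishes through
# a channel you already trust; the CA's display name alone proves nothing.
```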
Actual experts aren't respected by the general population enough for this to be a net positive change is my guess.
Basic security mechanisms when it comes to large ISP networks are a pipe dream though; instead we get vendors pushing extremely vulnerable Juniper gear because it's reasonably priced, even though these boxes have new root exploits found multiple times a year. None of the vendors give a crap about security; Cisco pays it some lip service (to win gov't contracts) but charges a premium for basic features.
Today you'd do better, with hindsight being 20/20.
CAs aren't mandated to log certificates for you (and indeed some offer the option to deliberately not do so, for reasons I'll get to in a minute), but if you run a mass-market CA, logging certs by default is the only viable way to stay in business: otherwise your entire customer-service budget will be spent explaining to customers how to log the certificates themselves and make them work with Chrome.
Firefox and Safari have announced plans to require SCTs, but without a specific version or timestamp deadline. Apple's language says "calendar year 2018", but that's probably ambitious. It scarcely matters; Chrome alone is already too many users for a commercial CA to ignore.
So, why aren't all CAs logging every certificate and baking the SCTs into the final certificate? Well, when a certificate is logged that makes it public, but power users may want the ability to sidestep that. For private systems they may just have decided to never run Chrome (and good luck to them in the future when IE6 on Windows XP is the only option left that doesn't check CT). But for public systems if you're technically capable you can get yourself unlogged certificates, then at launch time log them, collect the SCTs and deliver those to the TLS client rather than baking them into the certificate. Google does this, a few branding practitioners do it. It's very important to get it exactly right because if you screw up your certificates are worthless until you fix it. But if protecting naming is important to your business it's an option.
Under current policy this "nothing" means Google plus at least one independent log operator claim to have seen it and logged it. This eliminates the scenario in which a powerful adversary obtains certificates but only shows them to a single victim or small victim group. Whatever they did, everybody will see it.
But sure, it won't happen overnight... just saying the gaps are closing :)
Try this URL: https://invalid-expected-sct.badssl.com/
If you visit that in Chrome it gives you a full page interstitial warning it's bogus and if you click past the page is labelled "Not Secure".
In other popular browsers it works fine, because it has a perfectly nice certificate but the Bad SSL site is deliberately not presenting the SCTs for it. (It's hard to do this by accident: most places that give lay folk a certificate will assume your goal is to have your certificate accepted, so they will log a "pre-certificate" for you and bake the SCTs inside the certificate they give you, and you can't remove those.)
But yes, fully completing Certificate Transparency will be more work, we need a Gossip system so that monitors can consult each other to detect a split horizon, and mechanisms for clients to show summaries of what they know to determine if there are conflicts.
What we have now is like if you have a house you've half-built, there is no roof over two rooms, and no electricity, and the floor is bare dirt. But, it's still a house, and in a rain storm it's better to be inside that unfinished house than out in the cold and wet. The people outside in the rain don't think "That guy's house doesn't have triple-glazed windows" they think "Lucky bastard isn't out in the rain like me".
And so begins the ever greater attacker regression.
Sure, it's better than nothing but RPKI doesn't validate the entire path, only the origin.
If you want path validation, you need BGPSec. But BGPSec is basically un-deployable in real-world networks.
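For contrast, here is roughly what a ROA - the object RPKI validates announcements against - expresses; the prefix and ASN below are placeholders:

```
# Route Origin Authorization (ROA), conceptually:
#   prefix:     192.0.2.0/24   (documentation prefix, placeholder)
#   maxLength:  24
#   origin AS:  AS64500        (private-use ASN, placeholder)
# A validating router can reject announcements of 192.0.2.0/24 that
# originate from any other AS - but the rest of the AS path is
# unchecked, which is the gap BGPsec was meant to close.
```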
Your comment simultaneously contains almost no information and is super off topic.
I've found China Unicom openly hawking their data mining products. https://imgur.com/a/uNxA50K
Go ahead, monkey around with BGP, since I have the public key of the recipient of my packets I can detect this and block any type of misdirection.
And how did you get that public key?
An attacker could pretty easily obtain a valid Let's Encrypt certificate using a BGP hijack.
Also, the CA system is in bad shape - CAs have been hacked and certificates were leaked. Not to mention that some of the CAs your browser trusts are not entirely trustworthy or are located in untrustworthy countries. Oh, and from time to time there are attacks against TLS itself (e.g. https://drownattack.com/)
The other issue (the trustworthiness of CAs in countries that have the ability to compel a CA to issue a fake cert - Australia, say) is intended to be mitigated by the CT logging that is now required by the major trust stores. Sure, your Aussie CA might issue a fake certificate, but in doing so they ensure they get globally distrusted...
Without this, we will always be dragging our feet in dropping CA trust, because it will leave some perfectly valid sites shit out of luck.
Whoah, I have never realized this.
Is there some way to include some key in the DNS entry or something to mitigate IP hijacking?
Does HSTS protect against this?
More info: https://secure-certificates.princeton.edu/
CT allows you to detect misissuance - theoretically you could have a monitor service that watched all the logs for changes to your domains.
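A minimal sketch of such a monitor, using crt.sh's JSON interface over the CT logs - the domain and the allow-listed issuer are placeholders, and in practice you'd run this on a schedule and diff the results:

```shell
# Sketch: list certs logged for your domain and flag any issuer outside
# your allow-list. Domain and expected issuer are placeholders.
curl -s 'https://crt.sh/?q=%25.example.com&output=json' |
  jq -r '.[]
         | select(.issuer_name | test("Let.s Encrypt") | not)
         | "\(.not_before)\t\(.issuer_name)\t\(.common_name)"'
```

Anything this prints is a certificate for your names issued by a CA you didn't expect - exactly the misissuance signal CT is meant to surface.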
Longer term something (no opinion stated on exactly what) needs to be done to rectify the trust model for BGP and DNS
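On the DNS side, one existing (if little-deployed) mechanism for putting key material in DNS is DANE/TLSA - though it only helps if the lookup itself is protected by DNSSEC, and mainstream browsers don't check it. A record sketch, with the domain and digest as placeholders:

```
; Pin the TLS cert for port 443. Usage 3 = DANE-EE (end-entity cert),
; selector 1 = SubjectPublicKeyInfo, matching type 1 = SHA-256.
_443._tcp.example.com. IN TLSA 3 1 1 <sha-256 digest of the server's public key, in hex>
```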
Then we can delist compromised CAs, yay :)
(Sure, it'll take time, but gaps seems to be closing on so many layers)
At the top of the screenshot it says "traceroute from London to ..." - no traceroute program knows where it is in the world!
Also, the locations of each hop in traceroute (NY > Chicago > Ashburn etc.) - no traceroute program will know where in the world those IPs are. I suspect the author has guesstimated based on the reverse DNS records for the IPs and latency.
Traceroute does have the ability to show you the ASNs in a path, but that is based on a WHOIS lookup of the IPs it's discovering. So it could be wrong, since it assumes the IP address of each hop was actually announced by the ASN that owns it.
traceroute news.ycombinator.com | grep -f chinese-ipv4 -f chinese-hosts
See also: https://news.ycombinator.com/item?id=5192656
Edit: ha! Ironically, Oracle's site about China spying on you won't load its content unless you allow Google Analytics code to run. If the Google Analytics code fails, the rest of their code also fails.