Microsoft quietly pushes 18 new trusted root certificates (hexatomium.github.io)
286 points by svenfaw on June 27, 2015 | 136 comments

I'm glad someone noticed, and at the same time it's a shame it took a month before the news actually came out.

I think this demonstrates 2 very major problems with SSL Certificates we have today:

1. Nobody checks which root certificates are currently trusted on your machine(s).

2. Our software vendors can push new Root Certificates in automated updates without anyone knowing about it.
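Checking is not hard to automate, either. Here's a minimal sketch (the bundle path is an assumption for Debian-style systems; adjust for your OS) that fingerprints every root in a PEM bundle, so you can diff the list over time and notice additions like these:

```python
import hashlib
import os
import re
import ssl

def pem_fingerprints(pem_text):
    """SHA-256 fingerprint of every certificate in a PEM bundle."""
    blocks = re.findall(
        r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
        pem_text, re.S)
    # PEM_cert_to_DER_cert strips the armor and base64-decodes the body.
    return [hashlib.sha256(ssl.PEM_cert_to_DER_cert(b)).hexdigest()
            for b in blocks]

# Common Debian/Ubuntu bundle path -- an assumption; adjust per distro/OS.
BUNDLE = "/etc/ssl/certs/ca-certificates.crt"
if os.path.exists(BUNDLE):
    with open(BUNDLE) as f:
        for fp in pem_fingerprints(f.read()):
            print(fp)
```

Pipe the output to a file, commit it somewhere, and any pushed root shows up as a diff.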

More on this rant: https://ma.ttias.be/the-broken-state-of-trust-in-root-certif...

Root certificates are just one manifestation of a much larger problem, which could be described as follows:

1. Nobody checks the source code of the software installed on their machines (or that the binaries were actually built from the purported source code).

2. Our software vendors can push new software in automated updates without anyone knowing about it.

Suppose that the CA system were replaced with a PGP-style web-of-trust or SSH-style trust-on-first-use system. How would you know that your software vendor has supplied you with a TLS implementation that's correctly and faithfully implemented? Just as they can push out new root CAs, they can push out a new TLS implementation that introduces a bug or backdoor that bypasses all trust checks.

So why single out root certificates?

>So why single out root certificates?

Because you can only solve one problem at a time.

If you use Linux, both issues are pretty much sorted. Repositories provide a great way to install trusted software, if you're careful about which repositories you install on your system.

Well, the issue of "nobody checks the source" is not necessarily solved. Just because anyone can check the source doesn't mean anyone actually does. And just because a repository has been historically trustworthy doesn't mean it can't be corrupted.

There are real people behind all levels of these projects, and people are vulnerable to corruption, coercion, etc.

True. But with signed source (if you can trust the verifier... turtles all the way down) and projects for reproducible builds like [1], the job of checking can be distributed, and you can trust several different parties to do part of the work.

Now, I still think it makes sense to single out the opaqueness of CA trust. The raison d'être for the whole CA system is trust, more specifically delegating trust. It provides nothing else.

Which is why it's so ridiculous that by default browsers will trust the Chinese and German governments to vouch for Swiss banks, and American companies to vouch for Wikileaks -- even if the Swiss banks or Wikileaks don't trust those entities.

[1] https://wiki.debian.org/ReproducibleBuilds
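Reproducible builds only pay off if independent parties actually compare artifact hashes. The verification step itself is a few lines of Python (a sketch -- the real Debian tooling is more involved):

```python
import hashlib

def sha256_file(path):
    """Stream a file through SHA-256 so large artifacts needn't fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(path_a, path_b):
    """True if two independently built artifacts are bit-for-bit identical."""
    return sha256_file(path_a) == sha256_file(path_b)
```

Two parties who don't trust each other each build from the same source, then compare hashes out of band; a mismatch means someone's toolchain or source was tampered with (or the build isn't reproducible yet).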

If you don't want to trust the Debian maintainers, you don't have to. Trusting them, however, is a valid decision for many people.

Exactly. I'll add that we keep seeing obvious vulnerabilities in open source code that go back years. Hence my essay on different types of source sharing and review, which argues that the actual assurance is in the review, not the source-sharing arrangement. It's hard to assess how much review any given FOSS project got, and obviously harder for most proprietary software. Sometimes impossible for both.


Tangent: is there an easy way to find/download all the comments you have posted to Schneier's blog? They would be more accessible in a github Jekyll repo, or any source format which could be converted to a single PDF for offline reading.

Haven't gotten around to doing that since I've been so busy the past few years and... the posts added up to a lot of work. Call it my technical debt. ;) Hadn't thought of PDF. Nice idea. Right now I have two things: a text file with links to many key posts there, with headings; and a zip file with a good chunk of those in local text copies. I've always used text because it's the safest and most portable format (except on modern devices lol).

I can give you a pastebin with the links, or a link to the zip containing the subset I've downloaded so far. Whatever you prefer. Eventually I'll have one link to an actual web site haha.

Zip of local text copies would be great, thanks!

Below is the link to the zip. I included the two indexes that have the links. Most designs are local already, but the essays folder has only a tiny amount of my essays. So, look at the essays txt file, because there are quite a few great ones (including OPSEC) that I haven't transcribed yet. Download it quickly, because I'll take it down in a few days.


Got it, thanks very much for uploading. I discovered some of your security essays a few months ago but couldn't find a way to search by poster name on the Schneier blog.

I've posted too much for that to even help lol. But I'll share some Google-fu to help you in the future:

site:schneier.com "nick p"

The first term limits results to the domain. Quotes limit results to those containing that phrase. A -name term removes results containing the name. You can use multiple quoted and minus terms. Those are the main two tricks I use to get good results.

So the trust is in Debian, or whoever's running your repos, instead of MSFT. I don't see how that really changes the dynamic a lot.

The problem with most people's complaints and solutions about certificate authorities is that pretty much every proposed solution, when applied at scale, becomes functionally equivalent to the trust system we currently have. At some point you have to trust _somebody_.

"2. Our software vendors can push new software in automated updates..."

Question: Why does the software need to be updated?

Vendor: "Bug fixes."

Question: Can I look at the source code?

Vendor: No.

I agree with agwa that this phenomenon of "automatic updates" goes well beyond root certificates and browser authors such as Microsoft. It is pervasive and seems to be growing.

On a perhaps related line of thought: I find it interesting that we are again seeing pushes to give browsers more control over the user's computer. Simply put, as I see it, a user visits a webpage and someone else gets to run their code on the user's computer.

Why stop there? Why not have email attachments that open and execute automatically?

Originally there was the "Java applet" idea in the early days of the www. Then there was Adobe Flash. These days the idea has several different working names. Obviously neither Java nor Flash is the language of choice. And the browser authors have something to try to put users' minds at ease: "sandboxing". But I see no difference in the issues this raises for users.

Is there any "sandboxing" for the browser itself? The browser and its authors are inherently trusted?

In the same way that root certificates installed on users' computers are "pre-approved"? Who approved them? Are users involved in that approval process?

If the certificate/code in question comes from the browser authors then it must be both necessary for and desired by each and every user?

The problem with automated updates from my perspective is that there are very few software authors who will not try to give me more than I actually need; if I am not careful I end up with the "kitchen sink". Without user intervention, software is like a gas: it will expand to fill space.

As for the CA system, I am my own CA root. The openssl binary is the antithesis of the so-called UNIX philosophy. How many things does it do? Perfect for an example program to include for "testing". This goes to agwa's comment. I use this software, but I know it is a quality-control disaster. Keep those updates coming.

In any event, the only certificates I "trust" are ones which I did not obtain over an untrustworthy network, i.e., the internet. That number is of course zero.

I will sign a certificate to satisfy a browser, but that does not mean I "trust" any part of the process. In practice, outside of organizational use, I see the whole CA scheme as a joke. (Not that its implementation has impeded the success of e-commerce.)

As a user, I play along with SSL/TLS and certificates only to get today's software to work. That's it. A nuisance more than anything else.

I agree with most of your points. One interesting thing with the new "office macro web" is that the sandboxing of js apps/domains is a little better/less convenient for the user than traditional file systems.

If I install the trollololz image messaging app in my ~/bin, I've given the author access to all of my personal files, because they're all readable by the user under which I'd run trollololz.

If I go to trollololz.com, it doesn't automatically have access to my gmail/outlook.com mail, Google spreadsheets or Dropbox files.

Of course this is a usability problem, so now users move data to data-services like dropbox, and give apps/websites access to that data...

We might have an opportunity here to give users a UX that works for more sensible compartmentalization of data and working ACLs/capabilities -- but time is running out, as convenience chases security out of people's lives.

I trust Mozilla more than Microsoft. Is there any way to simply purge all Windows certs and import everything from cert8.db?

I don't think we can really trust anyone more than anyone else any more. I think it would be trivial for any three letter agency to insert, blackmail and/or buy employees anywhere they want.

In an open source store with an open process like Mozilla's someone's bound to notice.

(actually, I'm not sure if Mozilla even can push certs without an explicit update)

Mozilla and its foundation being US-based, they can be the target of a gag order, making them liable if they disclose, talk about, or hint at a "fake" root cert added for the sake of an agency.

Once you start playing with gag orders, secret courts and whatnot, all kinds of fun stuff become possible.

Alleged liability. Many seem to think that those actions of the US government are not legally allowed. Most likely there is no real liability for not following a gag order, as speech is pretty unambiguously protected from being regulated or circumscribed by the US government.

Except by corrupting the source code from which packages are built -- and at least that couldn't happen without anyone outside noticing, because the code is public, and I bet foreign intelligence agencies that don't trust Microsoft to make IE secure for them are monitoring the change stream.

Why do you think it would be trivial for three-letter agencies to do those things?

Is there a legal mechanism, authority, or track record for such a thing?

If you're talking about Dual_EC_DRBG, that was a non-trivial, poorly kept secret that failed on launch. An alleged $10 million secret deal, plus development of the algorithm, doesn't sound trivial to me.

> Is there a legal mechanism, authority, or track record for such a thing?

The problem is, as a layman, I cannot know. I wouldn't have thought that something like FISA court orders was possible, where you get a secret order from a semi-secret court and you are not even allowed to talk about it.

Who knows, maybe there is a secret FOOBAR law that says agents can force any certificate agency to sign random certificates for them. Maybe some weird agency you never heard of forced every major manufacturer to include hardware backdoors, and lie about it.

A few years ago I wouldn't have thought that was possible. But my trust that the legal system is democratic and transparent has been thoroughly undermined.

Now, if you run a business and some people in suits come and order you to install a backdoor, threaten you, and tell you you can't talk about the incident to anybody besides your lawyer, you can't do anything about it -- and you'd better hope that lawyer is good, since otherwise you have no way of telling whether the order is legitimate or not. Those people might as well be criminals, and you have almost no way to find out. Back in the pre-9/11 world, if you didn't recognize the IDs of the, say, FCK agency, you would have phoned around a bit and then told them to f'ck off after hearing their outlandish demands -- because there was no way something like that could happen in our democratic country. You can't assume that anymore nowadays.

A simple "no, I don't actually know" would have sufficed.

I'm not asking you what episode of The Blacklist you enjoyed the most, I'm asking you about real life. The Lavabit company was served a real warrant issued by a real judge, served by a real officer of the court. NSLs are served by real FBI agents with real badges. I'm not debating the anti-liberty essence of an NSL, but the "men in black" fear is completely unfounded given the things that are in the public eye.

The strawman of the "FCK agency" agents ordering people with the threat of jail time to put backdoors in their software isn't backed up by any credible fact. We can suppose and assume all day, but you shouldn't take it for granted that everyone agrees or should agree with you.

* The alleged RSA backdoor was reported by Reuters to be a $10M bribe, which doesn't sound coercive to me. Not to mention the NSA has no arrest powers outside its facilities, but sure.

> to anybody besides your lawyer

I think that Snowden's email provider was not even allowed to talk about things with their lawyers at some point in time...

I think jahnu was referring to obtaining the private keys for a trusted signing authority, which would enable said agency to create valid-looking certificates for the purpose of MITM. Weak algorithms are also concerning, but not really the subject of OP.

You have heard the news since Snowden blew the whistle, right?

An NSL could set up the gag order, and it could be part of the whole "need to listen to data coming in from abroad" parts of the Patriot Act (though I think the USA FREEDOM Act removed some of that?)

I think that even in the current state of things there's not much standing for the NSA to force that to happen.

Making a utility for that would not be too hard. But it would probably kill things like the Windows Update process. The Windows certificate store is system wide, and is used for other things besides http.

The Microsoft root CA cert is not included in the NSS trust store. Removing it would break Windows Update (something that Samsung has recently been accused of breaking). So you have to trust the Microsoft root CA cert; and if you trust that, you trust that they won't sign a rogue SubCA cert, which they could do.

If you don't trust your trust provider, don't use their software?

But there are different degrees of paranoia. I'm fine(ish) with trusting Microsoft to sign drivers, updates and applications.

I'm also fine with them signing for outlook.com, microsoft.com etc.

I'm not fine with them signing for wikileaks -- but I also am not really worried about that. I'm worried that some fly-by-night CA will lose their keys, get hacked, etc. So I don't want any more than a minimum of CAs on my system, and I'd like to approve them on a domain-by-domain basis.

Even with good UX, that'd be way more hassle than most people want -- I know that. But it would be nice to have a sane option for it.

And also some special control over updates/upgrades to the CA-cert store.

In short, I trust Microsoft to write software, I don't trust them to delegate trust, because they're trapped in the CA racket.
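A sketch of what that per-domain approval could look like as a local policy table (all host and CA names here are invented for illustration; a real implementation would pin certificate fingerprints, not display names):

```python
# Toy per-host issuer policy: consulted before accepting any chain.
# Hosts without an entry fall back to the default trust store,
# modelled here as "allow".
PINNED_ISSUERS = {
    "login.example-bank.ch": {"Example Swiss CA"},
    "wikileaks.org": {"Some CA Wikileaks Chose"},
}

def issuer_allowed(host: str, issuer: str) -> bool:
    allowed = PINNED_ISSUERS.get(host)
    return True if allowed is None else issuer in allowed
```

With a table like this, a mis-issued cert for a pinned host fails closed even if the issuing CA is in the OS store.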

How can you trust wikileaks on a non-MS CA if you are viewing wikileaks on MS software?

The CA system protects you from nothing in the underlying application, operating system, and device drivers.

I trust MS software. I don't trust MS to select which CA to trust. Sure, MS could backdoor my os/browser. I don't think that's very likely.

If MS force me to trust, say 800 CA certs, and all of them can mitm wikileaks, the likelihood that one of them could be penetrated by a hacker or a state actor is much higher.

Sure, it's not "absolute security", but nothing is.

There are different concepts that one trusts, or doesn't:

The OS, drivers, BIOS, hardware: I generally trust those. It may be naive, but I do. However, even if I'm right to trust them, that doesn't matter if I can't trust all the CAs -- not just because of malicious actions by the CAs, but because of incompetence by them.

The sensible assumption is that the OS, browser, drivers, BIOS, hardware -- everything -- is backdoored. But, and it's a very important but: they won't use those backdoors on you, because every time they use them they risk detection, and you are not a sufficiently high-value target. The guy looking at your case is not cleared to know about them.

No, the paranoid assumption is that everything is backdoored. The reasonable assumption is that everything is flawed, and that for some of those flaws, certain groups have exploits.

It's bordering on crazy to assume that Microsoft has backdoored all of Windows at the behest of the shadow intelligence monster ruling our world from behind the curtain.

While some have found many of the NSA revelations shocking, there's really been only a handful of new things: the audacity, and a surprising (but small) violation of the NSA's mission (to keep the USA safe).

The former comes down to crudely monitoring elected officials from allied countries; the latter, to the assumption that the NSA might have intentionally weakened crypto (and by doing so hurt the US, the US military, and US companies). Many veterans of the crypto wars of the 90s were surprised by that.

Now, just because a spy agency is good at its job doesn't mean that no one else is. I have little love for MS, Intel, AMD (actually I do love AMD. Who doesn't love an underdog? ;-) -- but I very much doubt they are complicit in some kind of grand Clipper chip scheme. They all want to sell to the US, Chinese, and Russian militaries, for one -- you can't do that if the equipment/software is useless.

Now, Siemens (and perhaps MS) might have helped the US with the operation against Iran. That's nice and patriotic, and probably paid well (if not in money, with contacts, further government contracts etc). That doesn't mean Siemens intentionally sabotaged the development of the stuff they sold Iran -- it just means Siemens is as incompetent and rushed as the rest of the tech industry. It's not quite the same as being malicious. Well, not the same as doing "malicious engineering/product development".

Did the US Navy sabotage Tor? Unlikely. If they did, it was masterful subterfuge. If the "great conspiracy" can't get its act together in sabotaging the armoured Humvees they've left to an actual enemy [1,2] -- do we really think they manage to organize around the long game of deploying secure, hidden backdoors in Windows, years ahead of their use?

No, I don't think Windows has hidden backdoors. It might have more security holes than Swiss cheese, and I generally run GNU/Debian anyway.

I'm open to being completely wrong though. Show me that a typical mainboard+ram+cpu combination is open to a hidden, intentional (even if masqueraded as accidental) backdoor, and I stand corrected.

In the meantime, let's just try to make stuff that works halfway the way we intend it to, and that includes the whole system. In this particular case, it means we throw out CAs that we have no reason to trust -- because a) we don't need them, as they don't currently certify anything we need to trust; b) they're incompetent; or c) they're hostile (e.g. a foreign government front, or foreign-government owned) -- and we reduce our attack surface.

Add a meaningful capability system ("NORID can only sign .no domains"), pinning, and some other stuff to reduce the scope of CA power (right now every CA is critical, because if someone has any one key, they can MITM everything -- that's just nuts).

Basically, I think the NSA is mostly a bunch of useless muppets, and I don't see why we should keep making their job easy. Especially as I'm in Norway, and so, while technically in an allied state, we've seen that that means fuck all. I'm not part of the scandal in the US, I'm not a US citizen. It's part of the above-board, initial brief of the NSA that they're right to try and steal all my data. And that hasn't changed.

[1] http://www.lobelog.com/down-the-iraqi-rabbit-hole-again/

[2] http://www.businessinsider.com/isis-turning-us-humvees-into-...

> Nobody checks which root certificates are currently trusted on your machine(s).

False. See slide 73 onward in http://www.slideshare.net/zanelackey/attackdriven-defense

I did nuke most certs on my Android/CyanogenMod phone at one point, and then added back a few at a time. Their list, while I can't speak for the concrete CAs, looks about the right length to me (slides 75-76):


Hmm. Are these actually new? Did the OP look at Microsoft's full certificate store and notice some additions, or did they look at their local machine? Because in the latter case, Windows does not include a full certificate store. Rather, it fetches them on demand.

EDIT: I googled the first two. "GDCA TrustAUTH R5 ROOT" and "S-Trust Universal Root CA" are both new certificates (~November 2014). The latter is in Firefox already, and is a new SHA-256 root certificate to eventually replace a SHA-1 certificate for an existing CA.

The CA system is so broken. Letting a vendor decide whom you should trust to issue intermediate certificates -- intermediates your software will then automatically trust not to issue illegitimate certificates -- doesn't make the system particularly trustworthy, especially seeing as they are still adding new root CAs for entities with unknown agendas (governmental agencies etc.).

Maybe the system could be changed to one where domains can only be signed by the DNS registrar, or something.

It's pretty crazy that any CA can issue a certificate for any domain. And paying for more validation doesn't help you at all, since it won't keep the "bad guys" from getting an illegitimate one from a cheaper/lazier CA.

> Letting a vendor decide

The problem with the root of trust problem is that you have to trust someone. Who is your preference? The government? The registries? ICANN? The hardware vendor? Those are all suspect too, and most have been historically compromised.

The only thing I can see that helps is "community" auditing -- things like Chrome's pinning and ... the action that discovered these new certs. So, as much as it sucks, we're sort of doing the right thing already.

See also Certificate Transparency (RFC 6962), which I haven't seen mentioned in this thread yet. Its goal is to require that all certificates be written to publicly auditable, tamper-proof logs in order to be considered secure by clients. That way, certificate mis-issuance is easy to detect for interested parties. As of earlier this year, Chrome already requires Signed Certificate Timestamps for new EV certificates in order to display the EV indicator.

https://tools.ietf.org/html/rfc6962 http://www.certificate-transparency.org/
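The tamper-evidence comes from the log's Merkle tree. A minimal sketch of the RFC 6962 hashing -- just the leaf/node domain separation; the general non-power-of-two split the RFC defines is omitted for brevity:

```python
import hashlib

# RFC 6962 domain-separates leaves from interior nodes with a one-byte
# prefix, so a leaf hash can never be reinterpreted as a node hash.
def leaf_hash(data: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(leaves):
    """Tree root over the leaf payloads (power-of-two count assumed here;
    RFC 6962 defines the general split for other counts)."""
    level = [leaf_hash(d) for d in leaves]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Because every issued certificate hashes into the root, a log can't silently drop or rewrite an entry without every later consistency proof failing.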

(disclaimer: I work on this project)

> Who is your preference? The government? The registries? ICANN?

No, silly: the domain owner! Something that's already possible with DANE: the domain owner gets to decide, as it should be, by publishing information about valid certificates into the DNS.

Of ALL options this one is the least crappy IMO.
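For the curious, the TLSA matching step itself is trivial. A sketch of the common DANE-EE / full-cert / SHA-256 case from RFC 6698 (the record bytes would come from a DNSSEC-validated lookup, which this sketch does not perform):

```python
import hashlib

def tlsa_matches(cert_der: bytes, usage: int, selector: int,
                 matching_type: int, association: bytes) -> bool:
    """RFC 6698 check for usage=3 (DANE-EE), selector=0 (full cert),
    matching_type=1 (SHA-256 of the cert). Other combinations exist in
    the RFC but are omitted here."""
    if (usage, selector, matching_type) != (3, 0, 1):
        raise NotImplementedError("only DANE-EE / full-cert / SHA-256 shown")
    return hashlib.sha256(cert_der).digest() == association
```

The hard part, of course, isn't the comparison -- it's getting the record to you untampered, which is exactly the DNSSEC dependency discussed below.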

You are assuming that DNS is a valid place to store security information. While it can be in many cases, it cannot be in general, due to the lack of authentication.

And if you require some level of authentication you are back to the same problem.

I'm not assuming anything, and I disagree because I think it's a different problem, not the same one. Also, did you read my last sentence?

Your phrasing is hard to follow. I was assuming you meant that all information necessary to validate the certificate is attached to the DNS record, which is crazy, as the primary reason for centrally signed SSL certificates is to protect you when you don't get the right DNS record for some reason (roughly speaking).

If you are instead saying that you should avoid having a URL in the certificate itself, validate it as normal, and then use the DNS record to match the certificate to the URL, I guess that could work. However, you still have the above problem, where anyone who has a valid certificate at all can impersonate any website by injecting a DNS record.

And who authenticates the domain owners? Your solution is basically "trust the registries" then. OK, fine. But they've screwed up in the past and they will in the future.

If the registry cannot be trusted, SSL kinda goes out the window anyways since a malicious MX DNS record or WHOIS email entry is usually sufficient to obtain a certificate.

Since we already are at the point where the security of the system hinges on the registrar, why not at least limit the damage potential by only accepting certificates issued by the registrar in question? Instead of letting any CA under the sun ALSO issue these?

PS: If certificate issuance were a required and expected task for all registrars, it could even lead to $0 certificates, which would surely boost HTTPS usage. It would even provide an incentive to pick a thorough registrar if you are concerned about security and validation, since no other entity could issue illegitimate parallel certs behind your back. That brings some meaning back to being a high-quality authority (even charging more for more trust), instead of the race to the bottom we have today. Sloppy registrar-CAs would lose business, because their trustworthiness would actually mean something for once. (You wouldn't register your serious domain with a registrar known to have poor validation checks in place for issuing certificates, for example.)

> If the registry cannot be trusted, SSL kinda goes out the window anyway

You seem to have gotten things backwards. If the DNS is hijacked, SSL is the only thing protecting a visitor from being lured into the hijacked site.

Connecting these two things into one makes SSL effectively worthless.

If DNS is hijacked, the hijacker could also get valid SSL certificates (using MX records to get confirmation emails, etc).

If it is an EV certificate, that would certainly not be sufficient.

Hmm, I think it is you who have things backwards.

The way it is now, hijacking DNS (between the domain's NS servers and any CA) would allow an adversary to request and obtain an illegitimate certificate by way of domain validation. If the registrar is the only valid issuer of certificates, this loophole is closed, since there is nothing for the CA to verify -- the registrar already knows who the customer is and can offer certificate signing without an insecure DNS-based domain validation.

For end-users with a hijacked DNS, the registrar-issued SSL certificate would still protect the transmission, because the DNS-spoofing adversary would not be able to present a valid certificate signed by the registrar. In fact, in today's CA environment, if the DNS hijacker is playing ball with a rogue CA, they could also spoof the SSL. With my suggestion, they couldn't, unless the rogue CA is actually the registrar, because the browser would see that the SSL certificate was not signed by the appropriate registrar.

You could create a DNS-style CA setup like this:

* Browsers would ship with ONE root CA, the public key of the "." root zone operator

* Each TLD ("com", "org" etc) has a CA signed by the root zone operator, valid only for signing TLD registrar CAs. The root zone CA could publish a signed list of TLD certificates daily/weekly/monthly (they shouldn't change too often and the number of entries would be relatively low). Anyone could compare notes and see if these change. There aren't any hidden intermediary CAs. Browsers could sync this list daily/weekly/monthly, the deltas should be minimal. ISPs could even provide mirror services, because the whole list is signed by the root zone operator anyways.

* Each registrar under a TLD has a CA certificate signed by the TLD operator, valid for only signing secondary domains under the given TLD. (i.e. customer domains). Just like the RootZone->TLD signed list of CAs, the TLD CA operator could publish a signed list of registrar certificates daily/weekly/monthly (again, they shouldn't change too often and the number of entries would be relatively low). Again, anyone can compare notes and see if things change. No hidden intermediary registrars. Browsers could yet again sync this daily/weekly/monthly and ISPs can mirror the list since it's signed.

* Registrants of customer domain ("example.com", "example.org" etc) can request a domain-specific CA at their registrar, and only at their registrar. This domain-specific CA is valid only for signing leaf domains, i.e. "www.example.com", "mail.example.com". With the domain-specific CA in hand, the customer can create their own leaf domain certificates "at home". The registrar should offer non-EV domain-CA certificates for free (there is no work to be done to validate domain ownership as the customer already has an account there), or charge a fee for EV certificates.

* There needs to be a way to determine who is the valid registrar for a given domain. This could be for example a TLS-based machine-readable WHOIS service address operated by each TLD. The TLS-based WHOIS service would negotiate TLS with the same TLD-CA-certificate as specified in the root list. Thus, a browser can connect to the TLD's WHOIS service and validate that a given domain is under a given registrar, and that the leaf SSL certificate is signed by the domain CA certificate, which is signed by the registrar CA certificate, which is in the signed-by-the-TLD list of registrars, which is in the signed-by-the-root-zone list of TLDs. This stuff could also be cached at the ISP level because everything is chain signed.
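The validation walk in that scheme is just a fixed-depth chain check. A toy sketch of it (HMAC-SHA256 stands in for real asymmetric signatures purely to keep the sketch dependency-free; every name and key here is invented):

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    """Stand-in signature primitive; a real system would use e.g. ECDSA."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_chain(root_key: bytes, links) -> bool:
    """links: list of (payload, sig, next_key) tuples, root-signed first.
    Each level's key must have signed the next level's payload
    (root -> TLD -> registrar -> domain CA -> leaf); any broken link
    fails the whole chain, closed."""
    key = root_key
    for payload, sig, next_key in links:
        if not hmac.compare_digest(sig, sign(key, payload)):
            return False
        key = next_key
    return True
```

The point of the structure is that a compromise at one level (say, one registrar's key) can only forge certs below that level, not for unrelated TLDs or domains.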

Maybe this is how DNSSEC works (I haven't looked into the details), but I think this would be a pretty neat way to limit the damage done by rogue CAs.

> Maybe this is how DNSSEC works

Not exactly, but it's very similar. DNSSEC has no concept of registrars, and each zone is entirely signed by the same entity.

You'd trust that single entity to inform the registrar anyway, so decentralizing it gains you nothing.

And so have the CAs.

I would have to think about it, but I'm sure there is SOME way to verify someone (either in person or digitally). This process would only have to happen once, to verify the CA cert for their domain. After that, if their CA cert is compromised it's not a big deal, because it only affects one domain.

Oh, and any hoster that allows its employees to reset virtual machine passwords or make account modifications via email/support ticket should NOT be allowed to participate in this scheme.

huh? No, for DANE it is required to have DNSSEC in place. Also read the last sentence of my previous statement again.

And now you're trusting the registrar, the registry, and ICANN. It's always pick your poison.

Also, DNSSEC still has some severe issues in practice. I'd be glad if we could get DANE widely deployed for mail servers.

Edit: Oh, and if you're not using a validating resolver yourself, you're also trusting that your ISP is using one and not manipulating the responses.

> Oh, and if you're not using a validating resolver yourself, you're also trusting that your ISP is using one and not manipulating the responses.

I really don't understand why people keep repeating that complaint. Of course, if you don't check the keys you don't get any security.

How is that a problem with the algorithm? And how is that a problem in practice? If you want some real amount of security, you check the keys, be they SSL certificates, DNSSEC signatures, or whatever other encryption system people put in place.

You're speaking from the perspective of somebody who could set this up for himself. "Normal" people don't know stuff like this, but we can't leave them unprotected.

That's why DANE for mail servers is such an attractive target. They're usually run by people who know what they're doing, and it helps bring a lot of infrastructure into place.

The point is that there's nothing for normal people to set up (or, at least, there doesn't have to be). Your email software should verify DANE keys, just like your browser verifies TLS keys.

The fact that current software is hard to configure is just a symptom of bad design. The only inherently hard thing in DNSSEC is distributing your domain data (not really harder than setting up a server for TLS), and normal people don't do that.

You still have to trust someone.

I think the trust model is broken for the same reason it can be said three can keep a secret, if two of them are dead.

How can humans create a software or hardware system that absolves us of the issue of trust, when people are, and have always been, up to no good?

The courts are full of people who claim other people have acted outside of good judgement.

I don't think it's possible to have trust without its opposite.

I'd like to be proven wrong.

In the early days of internet commercialization I thought issuing trusted certificates would ideally be provided by banks. Money transfers are all about trust and securing information, it seemed like a perfect fit. You get a smart card from your bank that holds their root certificates and pop that into any computer to authenticate an email or website. It also holds your private keys for authenticating yourself to others for online banking, voting, etc.

Of course, I've since learned how awfully inertial the banking industry is, and that the security of financial transactions is a lot of hand-waving and wishful thinking. On top of that, there's the outright corruption of banks profiting from theft, and they always cooperate with governments.

So the answer is, you shouldn't have to trust anyone. The security infrastructure must be built assuming hostilities are everywhere. Any trust that is given should be limited to just what is necessary to accomplish the task at hand.

You let the USER decide who to trust, as they are the only person who has enough information to do so. Maybe they trust the community for some things, maybe they trust a government for a different set of use cases, and maybe they trust their nerd friend for other stuff.

If there's anything we've learned from the last 20 years in security it's that the user is the least fit to decide. They are bad at passwords, they are bad at clicking things in email, they just are not security conscious. "Click here to trust my CA", how's that going to end?

If anything, that is an education problem, which is further compounded by the trend of wanting to turn users into completely passive entities who don't know or have to do anything, because then they are more easily controlled and manipulated.

The industry does not encourage users to think and learn. The industry wants its software to do the thinking for them. It doesn't want users to think about trust at all. I think that is the scariest part of where things are heading.

Users in the past had every opportunity to think and learn, and their failure to do so is why we are where we are today. I mean, there are still people who refuse to wear seatbelts or vaccinate their children... How can we ever expect people to understand all the moving parts in a secure computer system?

That's the great fallacy though. The amount of technical knowledge required to make truly informed security decisions is out of the question for 99% of all computer/mobile users.

It isn't about being passive; it's about having lives and jobs and other things to do with their time. A heart surgeon may be a genius but he doesn't have time to keep up with the latest research and teach himself PKI, certificates, crypto, and permission models across all the platforms he uses (and keep up to date on those too!)

Users will not waste their time thinking about trust. The history of our industry is ample evidence that it would be a losing battle. Not to mention a single mistake can be ruinous, requiring absolute perfection 100% of the time (from users and programmers).

> Not to mention a single mistake can be ruinous, requiring absolute perfection 100% of the time (from users and programmers).

One can hope that, eventually, formally mathematically verified open source systems will help solve the issue, at least on the programming side.

I doubt the halting problem will be solved.

The solution to the halting problem is to stop building universal/Turing-complete machines.

Edit: since this was downvoted, I will elaborate. The halting problem only applies to arbitrary programs on Turing-complete systems. It is possible to create useful systems that are not Turing complete. As a baseline example, it is easy to tell that the following Ruby program halts:
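(Illustrative only; any program whose loops have literal constant bounds would do.)

```ruby
# Sums 1..10. The loop bound is a literal constant, so the program
# performs exactly 10 iterations and then halts -- decidable by inspection.
total = 0
10.times { |i| total += i + 1 }
puts total  # prints 55
```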

Problems arise when one allows unbounded loops and unbounded recursion. If all loops and all recursions are provably finite, then a program can be proven to exit. All sorts of useful things can be done with systems composed of small, provably terminating programs. And it turns out, the vast majority (if not all) of the things relevant to securing a computer system can be implemented in formally provable ways.

> They are bad at passwords...

That's like saying they are bad at juggling with one hand while riding a unicycle. Passwords are the bad thing, and users have been subjected to them for quite long enough.

Case in point: users handing over passwords:


Agree - most users of computers do not know anything about infosec, nor do they care.

14% of adult Americans (about 30M people) are functionally illiterate[1]. They are bad at reading things in snailmail; many just don't see the value in it. We have records, stories, and even myths that seek to warn people about disreputable men who take advantage of their marks' ignorance.

Does society try to keep these people illiterate? Do we give up, accuse the millions of illiterate people of not caring, and find ways to hide important things from them? Do we simply let them stay ignorant, making sure to hide any books when they are around, "because they don't care"? While I'm sure there have been people who have done these things, we - as a modern society - generally find illiteracy to be a serious problem; serious enough that we spend significant amounts of money on education programs and other attempts to fix it.

Technology has created a new literacy. I'm not talking about programming or suggesting that people need to know HTML or anything. However, if you want to function in modern society, you need to understand some of the evolving language that society uses. This new literacy includes things like understanding that strings of the form "xxx@yyy.zzz" are probably an email address, and that something starting with "http://" or ending in ".com" or ".net" is probably referring to a website. The fact that the educational system has yet to catch up to the changes in technology is not a reason to force decisions on people, nor is it a reason to accuse them of not caring; instead, we should interpret the problems people have with security as a sign that a lot more education is needed, and we need to find ways to help people learn so they can protect themselves.

Unfortunately, there are a lot of people in the tech community who are confusing "frustration at not understanding obtuse technology" with "not caring about security". Many people do care, but lack the slightest idea about how to even approach important technical decisions. Certificates, keys, signing, and the like are not what matters, and are minutiae that should be left to those with proper technical knowledge. What matters in security is where the trust is being placed, and that is something a lot of people are skilled at.

Obviously, change isn't going to happen overnight. My suggestion is that we need a way to let people at least start down that path of managing trust. A good first step would be simply exposing the process so people can learn that a trust decision is even necessary.

[1] functionally illiterate is defined as those adults with a reading level below the basic requirements necessary to "perform simple and everyday literacy activities". Data is from https://nces.ed.gov/naal/kf_demographics.asp

And what should the default be for non-technical users? No trust at all? And asking each time, "do you want to trust this root certificate"? (Because we know how well that works....)

No, we need a system where different types of trust can be selected. (see my other post)

A PKI system like we have now is one of them, but we need a system where that is not the only available trust. Yes, it will be necessary to educate people about this topic (and never discussing it with the user will only keep them ignorant). Right now we have a system where users are forced to trust various CAs, and if someone decides they want to trust something else, that is difficult or impossible.

The problem of WHO to trust for a particular website (that is, every website) is a separate problem which can be solved over time. My point is that we don't even have the tools to let people even begin to solve that problem.

>and what should the default be for non-technical users? no trust at all?

Yes. They should pay a service to keep track of who they should trust. They wouldn't be opened up to any more risk by this than with the status quo; MS + Apple + Google would end up giving it away for "free" to those who don't have a problem with that or don't know better, and the option to get more technical about issues of trust would always be available to them.

You already have the tools to change who you trust. If you are afraid of automatic updates to your trusted domain list then setup a VM to test the updates or use a script to verify when the list changes.
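A rough sketch of the "script" half of that suggestion (the file names here are hypothetical, and `certutil -store Root` is just one way to dump the store on Windows; a real check should diff the actual certificates, not just a hash):

```ruby
require 'digest'

# Sketch: detect changes to a dumped trusted-root list by hashing it.
# roots_path is a text dump of your root store (e.g. produced by
# `certutil -store Root` on Windows); baseline_path holds the last
# known-good SHA-256 of that dump.
def check_roots(roots_path, baseline_path)
  current = Digest::SHA256.file(roots_path).hexdigest
  unless File.exist?(baseline_path)
    File.write(baseline_path, current) # first run: record a baseline
    return :baseline_recorded
  end
  File.read(baseline_path).strip == current ? :unchanged : :changed
end
```

Run something like this on a schedule and alert whenever it returns `:changed`.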

ICANN's the obvious choice since they are in charge of assigned names and numbers. They are controlled by a board, which could use a k-of-n signing scheme to sign off on their decisions, including delegation and their successors.

You're absolutely right, and the situation is actually much worse. Some time ago another “feature” was introduced — “key pinning”. For some reason it's compared to the standard ssh client behaviour (an explicit key for a given address), but it's actually nothing like that. If a MITM has access to any of the root certificates installed on your device, then he/she could create “valid” certificates that won't raise any warnings in the browser.

There is an addon (if I remember correctly) for Firefox that binds a specific key to a specific domain and raises a warning if the certificate changes.

I think you may be referring to Certificate Patrol (https://addons.mozilla.org/en-US/firefox/addon/certificate-p...)

It still works to hinder that guy who sits at the coffee shop trying to man-in-the-middle people connected to his fake wifi.

If you want a system that can hinder a government doing that, or a large organisation, or a corporation, then look elsewhere.

Do you remember to wipe your hard drive when you get your computer back from the repair shop? They could very well have installed a new root CA without you noticing....

I always recommend people to take out the hard drive before sending it in. This has caused some small issues, but most stores don't seem to mind as long as you are very explicit about telling them.

I wipe before AND after, if I have to hand the computer in for repairs. :)

I agree it works against most adversaries.

Wiping your HDD might not be enough: https://blog.kaspersky.com/equation-hdd-malware/

Theoretically you could hide malware in your GPU, USB, FireWire, webcam, NIC, baseband, Bluetooth, where ever.

No matter who you are or how smart you think you are if your adversary is determined and has the required resources they are going to get you.

SSL: even in 2015, still more concerned with the local neighborhood boogeyman than government intrusion.


as if it were less guilty of the same crimes.

I still don't understand why there isn't a public/private key pair for each connection. We seem to accept this broken style of encryption instead of pushing for real encryption.

Like session keys and PFS?

It's good that someone noticed... if you have automatic updates enabled, you have implicitly consented to giving Microsoft what is essentially a root account on your system. They can modify it to both fix - and break - things just as easily.

If you deploy an OS from a company, consider that your box belongs to them.

Let's be honest - if you don't vet every single line of code in your OS and software toolset, you run the risk of exposing yourself. There are levels of trust to be sure, but there is always trust.

Let's be honest - if you don't vet every single trace and circuit in your hardware, you run the risk of exposing yourself.

Just trying to further emphasize your point, not be obnoxious. The truth is, there's almost no possible way to not expose yourself. Anything made by humans can be abused by other humans for personal gain.

Honestly, if all the source code that is compiled is available as open source, and a binary with the same signature can be built from that code, then the chances that the code acts against your interests are much lower than with a binary blob that you cannot vet... etc.

I'm a Debian user: https://reproducible.debian.net/reproducible.html

Partly because they do care about these things, and they are sending patches upstream as well, so that as many applications in Debian as possible can eventually be built reproducibly.

I hadn't seen it before, but even better they added this piece of text on that page: "we care about free software in general, so if you are an upstream developer or working on another distribution, we'd love to hear from you! Just now we've started to programatically test coreboot and OpenWrt - and there are plans to test Fedora, FreeBSD and NetBSD too."

Let's be very clear: if you still think you are 'buying' software from a company like Microsoft you are just fooling yourself. You are paying for a license to use the software. That company still owns the software.

If only NCC/OCAP would audit Windows.

Parts of NCC have audited parts of Windows. They just weren't allowed to publish the results.

This has definitely significantly improved the security of Windows, though not given users any kind of protection against malfeasance by Microsoft (including in the sense of giving people malware or simply less-secure new versions of stuff through Windows Update).

I don't think automatic updates are used for the root certificate store. They're fetched on demand.

Yes, they are fetched on demand, but that doesn't mean they can't also update the root-CA list through Windows Update.

For example, Windows Update is used to blacklist CA certs (like in the case of DigiNotar). The step from that to adding CA certs over Windows Update is really small.

We need dynamically selectable trust.

I don't mean the simple ability exposed in most browsers to add/remove certs. That still assumes one set of trust that is used globally, which is completely incorrect.

Maybe I don't trust $COUNTRY to handle their root certificate for most uses. Currently we handle that case by removing the cert completely. Trust, however, is not a simple boolean value, and maybe I do trust that certificate for $COUNTRY's official government pages. I should be able to specify that I trust some certificate for some domain (or other, non-domain based use!), but not for others.
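A sketch of how that could be represented (all CA and domain names here are made up): map each root to the name patterns it's trusted for, rather than a single global boolean.

```ruby
# Per-scope trust: a root is trusted only for the host patterns
# listed against it. (All CA and domain names here are hypothetical.)
TRUST_POLICY = {
  "ExampleCountry Gov Root CA" => ["*.gov.example"], # their own sites only
  "Global Commercial CA"       => ["*"],             # trusted everywhere
}

def trusted_for?(root_name, host)
  TRUST_POLICY.fetch(root_name, []).any? do |pattern|
    pattern == "*" || File.fnmatch(pattern, host)
  end
end
```

An unknown root, or a known root asserting a name outside its list, would simply not validate.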

As another example, consider a local Web Of Trust. Whenever Web Of Trust is brought up, people complain about the difficulty of key exchange. Well yes, that's a difficult problem, but there is no reason that it has to be solved for all use-cases before anybody starts using it. Maybe a circle of (usually physically) local friends want to have secure communications. They can share a key in person easily, and so it should be easy to give access to a private forum by simply sharing a key/cert on a USB disk.

We can currently approximate those cases, but it is not well supported, and is certainly not something that most users could be expected to do. We can fix some of that with a better UI, but I'm suggesting a far more fundamental change, because actually solving problems like key sharing will not be easy, and I suspect they will only be solved once we have infrastructure in place. HTTP was successful because it did not require that everybody implement the full, fairly complex specification. Instead, we had a fluid, extensible protocol that allowed anybody to extend it, and that allowed for the development of a wide variety of software.

The problem with traditional PKI (at least as implemented) is that it assumes we can assign an absolute trust value to anything. In reality, trust is relative, and may in fact have multiple values at the same time. Until software is designed around those realities, it will always be inflexible and insecure for any use case where the needed trust assumptions do not reflect the assumptions made by the authors of the software.

Unfortunately, I'm an old-style UNIX nerd who is fine with using GPG, and I'm not sure what the UI for a dynamic-trust system would even look like. sigh

> They can share a key in person easily, and so it should be easy to give access to a private forum by simply sharing a key/cert on a USB disk

The truly paranoid would probably prefer a printout on paper; I've actually done this before with HTTPS self-signing.

One idea that might be both workable and possible would be to a) have a reasonable way to choose CAs (not just "oh, this XYZ corp sounds trustworthy, I'll let them in"), and b) have a capabilities system that's not entirely embedded in the CA system.

So, I could have my own CA, or GNU/Debian could have their own CA -- and I/they could sign a capabilities document allowing NORID to sign off on .no-domains, but not on any other TLDs. Or I could trust Microsoft and Google with their own domains (a whitelist), and nothing else.

Such lists could be distributed in a similar way to certs and revocation lists -- and might take us from a 100% broken system to an 80% working system -- which is probably as good as automated(ish) trust is going to get anyway.

[edit: I don't think delegating trust is completely wrong. It's insane to think that 10 billion people will acquire the required knowledge to make good trust decisions for software etc. Most people can't even do that for normal stuff like money.

Anyone remember nt-bugtraq? I loved that. I still can't read official MS security bulletins. I usually only see that there's a patch, rated at some comically low importance, and usually have no idea if I really need the patch or not. And I consider myself rather technical, and while Windows isn't my prime platform, it's not completely alien either.

The summaries on nt-bugtraq were gold. "This is a remote root in a default install of IIS". Ok, I don't run IIS. Note-to-self: make sure this patch is installed if I do. "This is a local login bypass in GINA". Ok, let's install that, prioritize public-facing machines. Etc.

That is why I'd like the ability to trust third-parties to help manage trust. I don't think MS is maliciously complicating their security bulletins, I just think they're trapped in an obscure dialect of Corporatese that I don't speak, and really have no desire to learn.

Maybe I should order a few copies of "On Writing Well" and ship them to Seattle, attn: MS security team.]

[ed2: Actually:


is better than what I remember. Note that the page doesn't mention certificates, and the new update doesn't appear to be mentioned on: https://technet.microsoft.com/en-us/security/advisory


I agree with you on this. The only way we can trust is if we know the people. And even then it's normally not 100%.

For example, if you want to take someone out to a nice restaurant, you might ask friends and colleagues, but some might have different tastes, and some might just have an incentive for you to go there. So once you have a few options, you can then Google those choices to see if they match what you are looking for.

So for something as simple as a dinner, we are already doing several checks to validate the place and comments that people have made.

The issue comes when the web is involved, as we expect instant reaction and response. We also hate having to answer repeated questions, as it interrupts our thought process. So we need a web of trust, but also with the option of sharing your personal web of trust. For example, I want to go to the page newcorp.com, but the certificate warns me it's not allowed by my chosen web of trust. So I query to see if anyone on my local web of trust has an allowed setting for this page. It finds that Bob does, but I have given Bob a low trust setting, so it won't allow me to go there; it will, however, say that Bob's web of trust does. Do you trust Bob's choices to allow access?

This would at least allow people to make choices of trust that they understand. As humans we thrive in groups, as long as they are not too big, and the Internet is a huge group of people. It's a lot easier to choose whether I trust Bob from the office than, for example, Google, ICANN or even Mozilla.

That way, the non-tech people can see the web of trust of their tech friends and make a choice on what they have set.

It's a thought, but I think we need to bring trust and security back to our local, controllable environment. The Internet will keep on integrating with our everyday lives in ways that are hard to even think about now. So we need to figure out security and trust now, not once my jacket is talking to the street lamp, telling it that the light should be lower because I had eye surgery yesterday and shouldn't have strong lights around me. How the hell do we add security to all those types of functions that will come?

Sorry for the long rant, but it's something that worries me about the future.

Douglas Crockford has an interesting video on why CA system is broken and how it can be fixed in his recent "upgrading the web" talk at: https://angularu.com/VideoSession/2015sf/upgrading-the-web

Essentially he's proposing side-loading a new application under a custom url scheme so that the browsers will launch a helper app that's used to handle web applications with the following url format:

    web: publickey @ ipaddress / capability
The url contains the server's ECC P-521 public key as part of the url, so that it gets around the CA system and clients can just encrypt requests with the server's published public key directly.

He's planning on developing the helper app based on a sand-boxed node.js and Qt application, which just uses a TCP session to communicate with the server.

Problem with 'ECC521' is that Google have all but killed P-521 support in Chrome. https://code.google.com/p/chromium/issues/detail?id=478225

Mozilla are considering doing this also. https://bugzilla.mozilla.org/show_bug.cgi?id=1128792

This is not just browser support -- it's TLS library support.

I had an interview with Douglas Crockford 2 years ago to work on a project that sounded a lot like this. I'm glad to hear it is out in the open.

"RXC-R2" is certainly insufficiently verbose.

I think the author might be a bit behind the news on Tunisia, though.

For anyone wondering, it's the Cisco CA/B-F Root CA:


They should know better.

From that document:

3.1.2 Need for Names to Be Meaningful

The Issuing CA shall ensure that the subject name listed in all certificates has a reasonable association with the authenticated information of the subscriber.

The hypocrites can't even manage that with their own 'RXC' CA name.

One might gently point out that the SubjectName in that cert is "C=US, O=Cisco Systems, CN=Cisco RXC-R2". Microsoft just seems to have abbreviated it to "RXC-R2".

Does anyone know why Cisco would need a root CA on Windows machines...?

Maybe as part of a way to secure the connection to their appliances? They could auto issue certs after verifying ownership. Maybe they're gonna give appliances a name off their own DNS <whatever>.mycloud.cisco? Hell maybe they're becoming a real CA and gonna sell trust services.

Dunno how MS's process is, but with Firefox there's a nice application process that's very open and you can see their claims.

Indeed. https://wiki.mozilla.org/CA:How_to_apply

However, nothing to be seen in Bugzilla about Cisco regarding their new CA. I wonder if it is MS only?


Requiring Internet Explorer for switch management sounds like exactly the kind of stupid Cisco might go for.

These days Chrome/Chromium uses the system store AFAIK, so you can at least get surprising breakage in your management tool that isn't due to TLS, but rather some kind of IE compatibility hack... ;-)

To have $largeCompany buying their security suite where they can MITM on the proxy/appliance/gw/whatever without generating any alerts on the client side?


If they misused the CA for that, there'd be a lot of backlash (cause it'd be detected). Apps would start blacklisting it. Seems like they'd have a better plan than that.

I don't think it is intended to be undetected. Plenty of large organizations use security appliances that MITM all traffic. Though I'd prefer they do it by pushing their own BigCo certificate to the boxes they own than relying on a cert that exists in all copies of Windows.

They want people to be able to BYOD and not know what's going on. (I'm sure that most employees were informed in some opaque memo, but that only goes so far.) "Our shared network works best with Microsoft phones." Hey, this is a new marketing effort for all those crap Lumias they're trying to unload!

So you're positing that Microsoft is going to intentionally destroy their TLS-CA verification system so any company can compromise any Windows device in the name of BYOD, without explicit action from the user?

As we've been told many times, "it ain't a bug it's a feature". Sure, you and I don't want to tell the DLP guys our bank passwords, but most people already don't care. It's not feasible anymore to just block TLS, and supporting client proxy config is so tedious.

Think about how it'd have to work. By including a "compromised" cert that any company can get and use, and that every Windows customer trusts, the CA system would be entirely destroyed. Microsoft wouldn't do that - it makes zero sense. Cisco might have something that having a CA makes easier, but including a MITM cert in every Windows install is not close to reality.

I'm not so sure. Before the "encryption fad" most traffic would go unencrypted to a Cisco device (router, switch, proxy, vpn concentrator (the inside of the network)) anyway.

As far as I know, The US has almost non-existent privacy laws when it comes to what corporations are allowed to do/demand to do to their employees through contracts wrt. traffic on company equipment.

Forcefully and silently intercepting traffic on employee networks would AFAIK be illegal in most of Europe.

Quick question: How did you find that document? It's not even indexed in Google so far.

I used DuckDuckGo.

I saw that and was wondering what he was thinking of. I had to double check to make sure the most recent parliament hadn't been up to any shenanigans, but seems like everything is on the up and up politically there. The ISIS situation is scary, but that's a whole other ball of wax.

They silently installed all the previously existing ones on my system, too. So did Apple on my Mac, and Ubuntu on my old notebook.

Maybe I should, but I am not going to individually double check every root certificate. I don't think I have the means to do so either.

Everything you Need to Know about HTTP Public Key Pinning (HPKP): http://blog.rlove.org/2015/01/public-key-pinning-hpkp.html

> A flaw in this system is that any compromised root certificate can in turn subvert the entire identity model. If I steal the Crap Authority's private key and your browser trusts their certificate, I can forge valid certificates for any website. In fact, I could execute this on a large scale, performing a man-in-the-middle (MITM) attack against every website that every user on my network visits. Indeed, this happens.

> HPKP is a draft IETF standard that implements a public key pinning mechanism via HTTP header, instructing browsers to require a whitelisted certificate for all subsequent connections to that website. This can greatly reduce the surface area for an MITM attack: Down from any root certificate to requiring a specific root, intermediate certificate, or even your exact public key.
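For reference, HPKP is delivered as a response header along these lines (the pin values below are placeholders, not real hashes; per the draft, at least one backup pin is required so you can recover if the primary key is lost):

```
Public-Key-Pins: pin-sha256="<base64 SHA-256 of your key>"; pin-sha256="<base64 SHA-256 of a backup key>"; max-age=5184000; includeSubDomains
```

`max-age` is in seconds (60 days here), after which browsers forget the pin unless it is refreshed.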

related previous articles:

Firefox 32 Supports Public Key Pinning (188 points by jonchang 304 days ago | 100 comments): https://news.ycombinator.com/item?id=8230690

About Public Key Pinning (72 points by tptacek 43 days ago | 5 comments): https://news.ycombinator.com/item?id=9548602

Public Key Pinning Extension for HTTP (70 points by hepha1979 242 days ago | 28 comments): https://news.ycombinator.com/item?id=8520812

It's easy to call the CA system broken (and I think it really is), but other, better solutions are not that easy currently.

We want an open, yet secure web, ideally anonymous. With the current setup, that is not so easily possible. Letsencrypt might help, but even with that, there is still someone you need to beg for a signed cert.

Maybe we need to think ahead.

@svenfaw, your RCC scanner says "Exiting... [Reason: signature database appears to be out of date.]", why is that?

Sorry about that, the website still had an old RCC build. If you download it again, you should get the current version (1.49).

For information: OpenTrust (3 roots) is just the new name of the entity that was once Certplus (1 renewed cert).

one word. or 4 letters. understand them and you understand everything else about this issue:


It's okay, most apps don't bother checking certificate validity anyway.
