The idea that a major government malware contracting effort was required to pop a particular Hungarian CA (presumably for deniability reasons) tells you something. The USG almost certainly controls several RSA keys that can be used to sign arbitrary SSL/TLS certificates. Why didn't they just use one of those?
I assume it's because they're expensive, and every time you use them, you risk burning the CA they're associated with: the major browser and OS vendors will excise your root keys, or attach constraints to their use.
Kim Zetter, for understandable narrative reasons, uses Gmail as an example of the kind of site that a CA-hijacker could compromise. But Gmail is the dumbest possible site to target with a traceable compromised CA key, because its key identities are pinned in Firefox and Chrome; if the key presented over the wire disagrees with the pins baked into the browser binary, the browser flips out.
This is why HPKP, TACK, and similar pinning/continuity/attestation frameworks are such a good idea. Over the medium term, they allow the users of the Internet to surveil SSL/TLS keys and detect compromised CAs.
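To make the pinning mechanism concrete, here's a minimal sketch (mine, not from the thread) of how an RFC 7469 HPKP pin is derived: it's the base64 encoding of the SHA-256 digest of the certificate's DER-encoded SubjectPublicKeyInfo, so the pin survives certificate renewal as long as the underlying key pair stays the same.

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """Compute an HPKP-style pin (RFC 7469): base64 of the SHA-256
    digest of the DER-encoded SubjectPublicKeyInfo."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# In practice the DER bytes come from the server's certificate, e.g.:
#   openssl x509 -in cert.pem -pubkey -noout \
#     | openssl pkey -pubin -outform der \
#     | openssl dgst -sha256 -binary | base64
```

A browser with a pin for a site compares this value against every chain it sees for that site; a forged certificate from a different CA hashes to a different pin, and that mismatch is exactly the detection signal being discussed here.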
Like you said, for attribution purposes the NSA had to get its keys elsewhere. I'm asserting that it's not just attribution that's on their mind.
You can't forge certificates without associating them with some specific CA.
If you forge a certificate for a pinned site, you risk detection.
If HPKP is widely deployed, every site could have that risk.
Unless you've popped all the CAs, the browser vendors can respond to detected forged certificates by curtailing the compromised CA. Meaning the NSA has to compromise another CA to continue their activities.
There aren't unlimited CAs to work with.
Stipulate that NSA doesn't care if attacks are attributed to them. Certificate surveillance is still an operational problem for them.
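For concreteness, RFC 7469 deploys HPKP as an ordinary HTTP response header; a site advertises the pins it expects browsers to enforce. The hashes below are placeholder values, not real pins:

```
Public-Key-Pins: pin-sha256="PRIMARYxSPKIxHASHxPLACEHOLDERxBASE64xVAL=";
    pin-sha256="BACKUPxxSPKIxHASHxPLACEHOLDERxBASE64xVAL=";
    max-age=5184000; includeSubDomains
```

The spec requires at least one backup pin (for a key held offline), so a site whose primary key is compromised can roll over without locking out its own users.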
Aside from the economic cost of "burning" CAs, attribution itself seems to be a consideration, e.g. in other contexts like TAWDRYYARD or LOUDAUTO, where "All components are Commercial Off-the-Shelf (COTS) and so are non-attributable to NSA."
"There was nothing like staring down the barrel of a suspected cyberweapon to clear the fog in your mind."
Which raises the question: after thousands of different Windows viruses have been identified, why is any company anywhere in the world still using that crap on any of their computers? Let alone why does a CA let Windows computers into their infrastructure?
There is a very strict rule when dealing with firearms. You NEVER EVER point a gun at anyone else. Ever. No joking. No kidding around. You only point a gun at someone if you are prepared to imminently use it against that person. This means, literally, "life or death".
I've been handling guns for over 45 years and I've never even considered breaking that rule, nor have I seen anyone else break it. BTW one corollary to this is "there is no such thing as an accidental discharge, only a negligent discharge".
Heck, I've even "died" in paintball enough times to know that I don't like the odds.
Software, OTOH, even malicious software, is "meh". Even if it's written by government hackers. It's simply not scary at all. IMO the analogy of guns to software was probably written by someone who wasn't familiar with firearms.
And you're right about my abhorrence of Windows. For close to twenty years now, everyone has sat around and said "oh no, woe is me, another virus, it's hopeless, it's terrible, I'm scared, my business is threatened, hackers are stealing my secrets". The details change, the general story is the same.
To which my response is: "the only winning move is not to play". So why do people keep using Windows?
And surely the most popular OS will be the one most targeted by criminals no matter what it is.
My heart starts racing a bit when I suspect my systems might be tampered with, so I think the metaphor does apply. It would have been better if the parent had provided a less hyperbolic metaphor.
Professionals have been stealing certificates since at least 2009.
It's unfortunate that still today the attitude is "cover it up" rather than disclosure. I would hope that any company that I entrust with my data would be forthright about breaches so that I, as a customer, would have the opportunity to take whatever precautions were necessary given the details of the breach.
CAs' only true marketable asset is trust.
Personally, if the CA system is something we're going to stick with, there should be substantial legal penalties for CAs that fail to disclose breaches in a timely manner. Imagine: something a multi-national trade agreement could actually do that would benefit everyone!
What the heck is this "alternate C" language stuff though? Anyone remember?