Alleged liability. Many seem to think those actions of the US government are not legally allowed. Most likely there is no real liability for not following a gag order, since speech is pretty unambiguously protected from being regulated or circumscribed by the US government.
Except by corrupting the source code from which packages are built. At least not without anyone outside noticing, because the code is public, and I bet the foreign intelligence agencies that don't trust Microsoft to make IE secure for them are monitoring the change stream.
Why do you think it would be trivial for three-letter agencies to do those things?
Is there a legal mechanism, authority, or track record for such a thing?
If you're talking about Dual_EC_DRBG, that was a non-trivial, poorly-kept secret that failed on launch. An alleged $10 million secret deal, plus development of the algorithm, doesn't sound trivial to me.
> Is there a legal mechanism, authority, or track record for such a thing?
The problem is, as a layman, I cannot know. I wouldn't have thought that something like FISA court orders was possible, where you get a secret order from a semi-secret court and you are not even allowed to talk about it.
Who knows, maybe there is a secret FOOBAR law that says agents can force any certificate agency to sign random certificates for them. Maybe some weird agency you never heard of forced every major manufacturer to include hardware backdoors, and lie about it.
A few years ago I wouldn't have thought that was possible. But my trust that the legal system is democratic and transparent has been thoroughly undermined.
Now, if you run a business and some people in suits come and order you to install a backdoor, threaten you, and tell you you can't talk about the incident to anybody besides your lawyer, you can't do anything about it - and you better hope that lawyer is good, since otherwise you have no way of telling whether that order is legitimate or not. Those people might as well be criminals, and you have almost no way to find out. Back in the pre-9/11 world, if you didn't recognize the IDs of the, say, FCK agency, you would have phoned around a bit and then told them to f'ck off after hearing their outlandish demands. Because there is no way something like that would happen in our democratic country. You can't assume that anymore nowadays.
A simple "no, I don't actually know" would have sufficed.
I'm not asking you what episode of Blacklist you enjoyed the most, I'm asking you about real life. Lavabit was served a real warrant issued by a real judge, delivered by a real officer of the court. NSLs are served by real FBI agents with real badges. I'm not debating the anti-liberty essence of an NSL, but the "men in black" fear is completely unsupported by anything in the public eye.
The strawman of the "FCK agency" agents ordering people with the threat of jail time to put backdoors in their software isn't backed up by any credible fact. We can suppose and assume all day, but you shouldn't take it for granted that everyone agrees or should agree with you.
*The alleged RSA backdoor was reported by Reuters to be a $10M bribe, which doesn't sound coercive to me. Not to mention the NSA has no arrest powers outside its own facilities, but sure.
I think jahnu was referring to obtaining the private keys for a trusted signing authority, which would enable said agency to create valid-looking certificates for the purpose of MITM. Weak algorithms are also concerning, but not really the subject of OP.
Making a utility for that would not be too hard. But it would probably kill things like the Windows Update process. The Windows certificate store is system wide, and is used for other things besides http.
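A sketch of what such a utility might look like at the application level (leaving the system-wide Windows store alone, since touching that is what breaks things like Windows Update): Python's `ssl` module can build a client context that trusts only an explicit, hand-curated CA bundle instead of the OS store. The bundle path here is a hypothetical placeholder, not a real file:

```python
import ssl

def restricted_context(ca_bundle_path=None):
    """Build a TLS client context that trusts ONLY the CAs in the
    given PEM bundle, rather than the system-wide trust store."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True
    if ca_bundle_path is not None:
        # Hypothetical path to a hand-curated PEM bundle of approved CAs.
        ctx.load_verify_locations(cafile=ca_bundle_path)
    # Deliberately NOT calling ctx.load_default_certs(), so nothing
    # from the OS trust store is trusted.
    return ctx

ctx = restricted_context()
print(len(ctx.get_ca_certs()))  # 0 -- empty trust store until CAs are added
```

The point of the sketch is that per-application trust is easy; the hard part, as noted above, is that the OS store is shared by everything from updates to driver signing.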
The Microsoft Root CA cert is not included in the NSS trust store.
This would break Windows Update (something that Samsung has recently been accused of breaking).
So you have to trust the Microsoft CA Root Cert;
And if you trust that, you trust that they won't sign a SubCA cert, which they could do.
If you don't trust your trust provider, don't use their software?
But there are different degrees of paranoia. I'm fine(ish) with trusting Microsoft to sign drivers, updates and applications.
I'm also fine with them signing for outlook.com, microsoft.com etc.
I'm not fine with them signing for wikileaks -- but I'm also not really worried about that. I'm worried that some fly-by-night CA will lose their keys, get hacked, etc. So I don't want any more than a minimum of CAs on my system, and I'd like to approve them on a domain-by-domain basis.
Even with good UX, that'd be way more hassle than most people want -- I know that. But it would've been nice to have a sane option for it.
And also some special control over updates/upgrades to the CA-cert store.
In short, I trust Microsoft to write software, I don't trust them to delegate trust, because they're trapped in the CA racket.
I trust MS software. I don't trust MS to select which CA to trust. Sure, MS could backdoor my os/browser. I don't think that's very likely.
If MS forces me to trust, say, 800 CA certs, and all of them can MITM wikileaks, the likelihood that one of them could be penetrated by a hacker or a state actor is much higher.
Sure, it's not "absolute security", but nothing is.
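A back-of-the-envelope way to see the point about 800 CAs: if each CA independently has some small probability p of being compromised, the chance that at least one of n trusted CAs is compromised is 1 - (1-p)^n. The per-CA probability below is a made-up illustration, not a real estimate:

```python
def p_any_compromised(p_each, n):
    """Probability that at least one of n independent CAs is
    compromised, given each has probability p_each of compromise."""
    return 1 - (1 - p_each) ** n

# Made-up per-CA compromise probability of 0.1%.
print(round(p_any_compromised(0.001, 10), 3))   # ~0.01: ~1% with a minimal CA set
print(round(p_any_compromised(0.001, 800), 3))  # ~0.551: over 50% with 800 CAs
```

Since every trusted CA can sign for every domain, the weakest of the 800 sets the bar, which is why trimming the list (or scoping what each CA may sign) shrinks the attack surface so dramatically.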
There are different concepts that one trusts, or doesn't trust:
The os, drivers, bios, hardware. I generally trust that. It may be naive, but I do. However, even if I'm right in trusting that, it doesn't matter if I can't trust all the CAs. Not just in terms of malicious actions by the CAs, but their incompetence too.
The sensible assumption is that os, browser, drivers, bios, hardware; everything is backdoored. But, and it's a very important but... But, they won't use those backdoors on you. Because every time they use the backdoors they risk detection, and you are not a sufficiently high value target. The guy looking at your case is not cleared to know about them.
No, the paranoid assumption is that everything is backdoored. The reasonable assumption is that everything is flawed, and that for some of those flaws, certain groups have exploits.
It's bordering on crazy to assume that Microsoft has backdoored all of windows at the behest of the shadow intelligence monster ruling our world from behind the curtain.
While some have found many of the NSA revelations shocking, there's really been only a handful of new things: the audacity, and a surprising (but small) violation of the NSA mission (to keep the USA safe).
The former comes down to crudely monitoring elected officials from allied countries, the latter the assumption that the NSA might have intentionally weakened crypto (and by so doing hurt the US, the US Military, and US companies). Many veterans of the cryptowars of the 90s were surprised by that.
Now, just because a spy agency is good at its job, doesn't mean that no-one else is. I have little love for MS, Intel, AMD (actually I do love AMD. Who doesn't love an underdog? ;-) -- but I very much doubt they are complicit in some kind of grand Clipper chip scheme. They all want to sell to the US, Chinese, and Russian militaries, for one -- you can't do that if the equipment/software is useless.
Now, Siemens (and perhaps MS) might have helped the US with the operation against Iran. That's nice and patriotic, and probably paid well (if not in money, with contacts, further government contracts etc). That doesn't mean Siemens intentionally sabotaged the development of the stuff they sold Iran -- it just means Siemens is as incompetent and rushed as the rest of the tech industry. It's not quite the same as being malicious. Well, not the same as doing "malicious engineering/product development".
Did the US Navy sabotage Tor? Unlikely. If they did, it was masterful subterfuge. If the "great conspiracy" can't get its act together in sabotaging the armoured Humvees they've left to an actual enemy [1,2] -- do we really think they manage to organize around the long game of deploying secure, hidden backdoors in windows, years ahead of their use?
No, I don't think Windows has hidden backdoors. It might have more security holes than a Swiss cheese, and I generally run GNU/Debian anyway.
I'm open to being completely wrong though. Show me that a typical mainboard+ram+cpu combination is open to a hidden, intentional (even if masqueraded as accidental) backdoor, and I stand corrected.
In the meantime, let's just try to make stuff that works halfway the way we intend it to, and that includes the whole system. In this particular case, it means we throw out CAs that we have no reason to trust (that reason being: a) we don't need them, they don't currently certify anything we need to trust; b) they're incompetent; c) they're hostile, e.g. a foreign government front or foreign-government-owned) -- and we reduce our attack surface.
Add a meaningful capability system ("NORID can only sign .no domains"), pinning, and some other stuff to reduce the scope of CA power (right now every CA is critical, because if someone has any one key, they can MITM everything. That's just nuts).
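Pinning, in its simplest form, is just comparing the certificate (or public key) a server presents against a fingerprint you recorded earlier, so no CA -- trusted or not -- can quietly substitute its own cert. A minimal sketch; the "certificate" bytes here are a dummy stand-in, not real DER data:

```python
import hashlib
import hmac

def fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a certificate in DER form."""
    return hashlib.sha256(der_bytes).hexdigest()

def pin_matches(der_bytes: bytes, pinned: str) -> bool:
    """Timing-safe comparison of the presented cert's fingerprint
    against the previously recorded pin."""
    return hmac.compare_digest(fingerprint(der_bytes), pinned.lower())

# Dummy stand-in for a real DER-encoded certificate.
fake_cert = b"not-a-real-certificate"
pin = fingerprint(fake_cert)  # recorded on first contact ("trust on first use")

print(pin_matches(fake_cert, pin))                 # True
print(pin_matches(b"mitm-substituted-cert", pin))  # False
```

The trade-off is the same one the comment above implies: pinning removes the CA from the trust equation for that domain, but you now own the problem of distributing and rotating pins yourself.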
Basically, I think the NSA is mostly a bunch of useless muppets, and I don't see why we should keep making their job easy. Especially as I'm in Norway, and so, while technically in an allied state, we've seen that that means fuck all. I'm not part of the scandal in the US, I'm not a US citizen. It's part of the NSA's above-board, original brief that they're entitled to try and steal all my data. And that hasn't changed.
But this line of thinking eventually leads to the Fermi Paradox.
The apparent size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist. However, this hypothesis seems inconsistent with the lack of observational evidence to support it.
The NSA and other agencies around the world already have you targeted. Tor makes you safer. Tor also makes you aware of the risks, which is good, since you are then better suited to defend yourself accordingly.