They don't need to deploy 0days if the vendor (willingly or unwillingly) cooperates. Also, Microsoft began heavily spying on Windows users as part of normal operation, making it difficult or impossible to fully opt out.
I don't understand how that would be possible. Such a change would be detected and very loudly discussed, making it pretty useless. There would be very little positive gain yet a whole lot of negative blowback from doing such a thing.
MS engineers can log in to your machine and run programs / download documents. There is also a keylogger of sorts that sends data back without warning you. I can't remember which bits you can turn off, which bits got backported to 8/7 without warning, etc.
To make a long story short: from what anyone can tell, there is no way for consumers to obtain a version of Windows that both receives security patches and can run with sane privacy settings. There is an acceptable version called Windows LTSB, but you have to pirate it.
This has been discussed ad nauseam on HN and elsewhere.
Are you suggesting that there's a cast-iron, guaranteed way of saying 'this stuff should be in the OS and nothing else'?
If you are suggesting that, are you suggesting the trust root for that particular stack is something other than the vendor? If so, who?
Take the example of Windows. Let's say they agree to put in a backdoor like DoublePulsar. Microsoft releases the official OS and says, 'We promise this is all good, and only stuff that should be in here is in here. Honest.' How do we, as third parties, detect that they've put something in there that shouldn't be?
I see you're CEO of verify.ly and have some background in this, so I'm actually quite curious to know how you'd detect a malicious closed source vendor like Microsoft who is working with a TLA to provide backdoor access.
> so I'm actually quite curious to know how you'd detect a malicious closed source vendor like Microsoft who is working with a TLA to provide backdoor access.
"Closed-source" certainly does not mean you cannot see the changes, just that far less people know how to read assembly/machine code to understand what is going on.
People frequently reverse engineer patches and updates, since the addition of features means more potential vulnerabilities. Security companies generally get a whole lot of free marketing in the press if they find and disclose major vulnerabilities (along with building detection/prevention into their products), so there is a large incentive there. Of course, it requires trusting security companies not to hold back findings like that, a valid concern, but it is at least a step up from completely trusting the vendor to deliver non-backdoored updates.
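To make that concrete, here is a minimal, hypothetical patch-diffing triage sketch in Python. It assumes the third-party pefile library and two builds of some Windows binary (the file names are made up); real patch diffing is done at the function level with tools like BinDiff or Diaphora, so treat this as nothing more than a first pass that narrows down where to look:

```python
# Toy first-pass triage for patch diffing: hash each PE section of an old
# and a new build of a binary and report which sections changed.
# Requires: pip install pefile
import hashlib
import sys

import pefile


def section_hashes(path):
    """Map each section name to the SHA-256 of its raw contents."""
    pe = pefile.PE(path)
    return {
        sec.Name.rstrip(b"\x00").decode(errors="replace"):
            hashlib.sha256(sec.get_data()).hexdigest()
        for sec in pe.sections
    }


def main(old_path, new_path):
    old, new = section_hashes(old_path), section_hashes(new_path)
    for name in sorted(old.keys() | new.keys()):
        if old.get(name) != new.get(name):
            print(f"{name}: changed -- disassemble and diff here first")


if __name__ == "__main__":
    # e.g. python diff_sections.py srv_old.sys srv_new.sys (hypothetical names)
    main(sys.argv[1], sys.argv[2])
```

In practice nearly every section changes between builds (timestamps, relocations, compiler churn), so researchers disassemble and diff at the function level, then ask of each changed function exactly the question below: how does the new code work, and how could it be abused?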
> Are you suggesting that there's a cast-iron, guaranteed way of saying 'this stuff should be in the OS and nothing else'?
The security researcher mindset would be along the lines of "How does this new added/changed functionality work, and how could it be abused?" (You are correct that there is no guaranteed way to find this; otherwise all software would be unhackable, which is not the case.)
> They don't need to deploy 0days if the vendor (willingly or unwillingly) cooperates.
> I don't understand how that would be possible. Such a change would be detected and very loudly discussed, making it pretty useless.
It would seem to me that these things are happening. 0days are being added (often made to look like simple bugs), security companies are detecting them, and we're talking about them... eventually. So you're both right, but there's a period, sometimes years, between the addition of a backdoor and its discovery. And the NSA doesn't care too much if one is found, as you can be sure it's not the only one, as the ShadowBrokers showed.
Take the example in this thread: EternalBlue. That particular flaw was introduced in XP, wasn't it? And it survived all this time despite countless security researchers poring over the code for a decade and more. It took a hack to reveal these tools.
Maybe the EternalBlue exploit really did just exploit a bug. Maybe it was a backdoor. It doesn't matter, though: if it was a bug, it lay undiscovered for years, which means there's plenty of opportunity for an actual backdoor to remain undiscovered too. So we have to deal with the possibility that 'exploitable code' (however it originated) may be around for decades and can be in every system as a result.
Following that logic, a new piece of 'exploitable code' could be added in the next Windows update and lie undetected for a decade. It's happened before, and we didn't find it until the ShadowBrokers did their work, so it can happen again just as easily.
What about Heartbleed? That was another piece of 'exploitable code' that was around for years undetected. The examples of this are no doubt many.
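To illustrate why that class of bug hides so well, here is a simplified, hypothetical sketch of the Heartbleed pattern in Python. The real flaw was a missing bounds check in OpenSSL's C heartbeat handler; this sketch only mimics the logic error of trusting a peer-supplied length field, using one buffer to stand in for process memory:

```python
# Simplified sketch of the Heartbleed class of bug: the handler trusts the
# length the peer *claims* instead of the payload's actual size.
# One bytearray stands in for process memory, where the request sits next
# to other per-connection secrets (in real Heartbleed: keys, passwords).
MEMORY = bytearray(b"hello" + b"SECRET_SESSION_KEY_0123456789")
PAYLOAD_OFFSET = 0
PAYLOAD_LEN = 5  # actual length of the peer's payload (b"hello")


def heartbeat(claimed_len: int) -> bytes:
    # BUG: echoes back claimed_len bytes without checking them against
    # PAYLOAD_LEN, so a large claimed_len leaks the adjacent bytes.
    return bytes(MEMORY[PAYLOAD_OFFSET:PAYLOAD_OFFSET + claimed_len])


def heartbeat_fixed(claimed_len: int) -> bytes:
    # Fix: never copy more than the payload actually contains.
    n = min(claimed_len, PAYLOAD_LEN)
    return bytes(MEMORY[PAYLOAD_OFFSET:PAYLOAD_OFFSET + n])


print(heartbeat(5))   # b'hello' -- an honest request
print(heartbeat(30))  # b'helloSECRET_SESSION_KEY_0123...' -- the leak
```

The difference between the buggy and the fixed version is a single min(); after the fact, nobody can tell from the code alone whether omitting it was a mistake or a plant.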
It would seem to me then that there are plenty of cases where a 'backdoor' has been placed and plenty where a genuine mistake was made, but we can't ever really know which is which.
I guess that is the problem for those of us who talk about it: it encourages taking sides, when the reality is that the paranoid are right in some cases and the cynics who think it's just a bug are right in others.
> So you're both right, but there's a period, sometimes years, between the addition of a backdoor and its discovery. And the NSA doesn't care too much if one is found, as you can be sure it's not the only one, as the ShadowBrokers showed.
EternalBlue was a vulnerability, not a backdoor, as a backdoor would imply it was intentionally inserted. Again, any proof of malicious code being intentionally inserted would be huge news and would permanently kill trust in the vendor.
> Following that logic, a new piece of 'exploitable code' could be added in the next Windows update and lie undetected for a decade. It's happened before, and we didn't find it until the ShadowBrokers did their work, so it can happen again just as easily.
This would be huge news. A negative cannot be proven, but there is not much benefit in theorizing about intentional backdoor insertion without proof. Anger at something like that is best saved for a provable case. (Think of it this way: a non-tech person can express outrage, call their reps, etc. when there is definitive proof of this, versus saying "oh, I heard this was already happening, so whatever".)
> I guess that is the problem for those of us who talk about it: it encourages taking sides, when the reality is that the paranoid are right in some cases and the cynics who think it's just a bug are right in others.
There is nothing wrong with being overcautious. Problems arise when worrisome conclusions are reached, causing some users (for example) to be unsure about the safety of automatic updates. The effect would be users avoiding the perceived risk of a malicious update while leaving themselves more exposed to real, known vulnerabilities by not installing important security patches.
If you are referring to the level of analytics gathered, I fully agree! My point is, there would be a similarly loud reaction (at a wider scale) if a backdoor were introduced.