> Your security should never depend upon security of your source code.
For sure. I don't think it's about that, though.
Even if the application in question were open source, if the project lead is willing to cooperate in any way their government asks, they could probably ensure the existence of a back door. For this reason, where possible, I would prefer to use encryption software written by people who are principled to a fault (or who at least do a good job acting as if they were).
Let's imagine Linus Torvalds or Greg Kroah-Hartman in this same situation. Linux source is available, so let's say they were asked to ensure that a certain patch to a cryptographic API was not accepted before a certain window. Maybe the crypto API maintainers were on the call as well saying that they were on board with the plan (apologies to those people, I don't know you and mean no offense). I like to think that they would:
1. Turn down the NSA.
2. Attempt to get the word out about what they had been asked.
3. Find new crypto maintainers.
And yes, it's entirely possible that this is not at all how it would go down. Maybe they would be very cooperative. I don't know any of these people personally. But what I do know is that they have not, as of yet, made a blog post about that time when the NSA did ask them to betray an unspecified user and how they did everything they asked without resistance.
I don't think I'm being idealistic here. Software and encryption are global endeavors. People who blindly believe that the enemies of their state are also their enemies, or even that obedience to local laws is a moral imperative, should not write crypto software. Or at least, I hope they make their beliefs known like this guy did so that I can avoid their software.
Being willing to provide source code is entirely different than inserting a backdoor or purposefully holding back security updates.
One should have zero impact: as I said, modern crypto algorithms do not depend on keeping the algorithm secret to maintain security. The implementation, though, can absolutely be full of bugs that misuse crypto primitives.
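To make that distinction concrete, here's a toy sketch (my own illustration, not anyone's real code) of how a fully public algorithm can remain sound while an implementation bug still breaks security. The cipher below is a stand-in stream cipher built from SHA-256 in counter mode; the bug is the classic one of reusing a nonce for two messages:

```python
# Kerckhoffs's principle: this algorithm is fully public; security
# rests only on the key. But a classic *implementation* bug -- reusing
# the same nonce (and thus keystream) for two messages -- leaks data
# anyway, without the attacker ever learning the key.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream: SHA-256(key || nonce || counter) blocks.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"secret key"
nonce = b"fixed nonce"        # BUG: same nonce used for both messages
m1 = b"attack at dawn"
m2 = b"defend at dusk"
c1 = encrypt(key, nonce, m1)
c2 = encrypt(key, nonce, m2)

# The identical keystreams cancel: c1 XOR c2 == m1 XOR m2, so an
# eavesdropper recovers the XOR of the two plaintexts with no key.
xor_cts = bytes(a ^ b for a, b in zip(c1, c2))
xor_pts = bytes(a ^ b for a, b in zip(m1, m2))
assert xor_cts == xor_pts
```

Reading the source tells an attacker nothing useful about the key, but it does make bugs like this one trivial to spot.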
The other has a direct impact on security.
I don't think one blurs the line to the other, they're fundamentally different things.
If anything, this is further reason to use crypto properly: done right, someone with the code in hand gains no advantage in attempting to break the encryption.
To me anyway, someone willing to give source code to the NSA doesn't imply they'd be willing to do nefarious things. There's nothing inherently nefarious about giving the NSA source code.
That said, I can understand being skeptical. But people love to talk about how open source is more secure because it has eyeballs on it, and now we try to pull the "it's not secure because the source was given to the NSA" line.
Source availability helps in crafting successful exploits in the real world.
Providing the source of a closed source project to an adversary so that they can break your software is nefarious.
If the author's justification was...
"My crypto implementation is perfect. There are no bugs in my code, so providing source code (which the user community does not have) to an adversary does not have any effect on any user's security."
...then the author is simply incompetent, not malicious. But I am not inclined to accuse the author of such incompetence.
Most people who argue that open source is equally or more secure than closed-source software do not argue that access to source code provides zero benefit to attackers. Instead, they argue that the benefit gained by the many additional sets of eyes belonging to researchers and other "good guys" outweighs the detrimental effect of letting the "bad guys" see as well.
The question in this instance is not whether the closed-source-superior or open-source-superior security camp is correct. The only related question is: "Does access to source code provide any benefit whatsoever to an attacker?"
I don't think I'm the "only person around here" who would argue that yes, it does. And that if you're taking the closed-source route, like this project's author, you can't provide copies of your source code to one specific attacker when they ask. If you do that, you're _definitely_ worse off (from a security perspective) than the people in the open source camp. I'm sure the closed-source-superiority people would agree.
Not necessarily - the normal argument for open source being secure is that the source code can be reviewed by the community and any security issues will be noticed and patched. In this case the parent specifically mentions source code the community doesn't have access to.
Your argument is that withholding source code raises the cost of attack for the NSA. They'll have to buy expensive decompilers, hire competent assembly/C programmers, it'll take longer, etc.
Seems like it would similarly be unethical for a local bakery to give the NSA a discount on cheesecake.
The NSA does a lot of things. A lot of them are good. Some of them are bad.
When the NSA asks you to help them do bad things, you say no. If you feel inspired to give them a cheesecake discount because of the good things they do, that is fine.
I'm pretty sure if the NSA asked him to add a backdoor, he wouldn't do it. (They already asked if he had an existing one, and he clearly realized how bad it would be if there were.)
That may well be true. But still, this story influences my _guess at the likelihood_ that this author would do so in the direction of "more likely."
Likelihood that the authors/owners of software are willing to cooperate, and capable of cooperating, with adversaries is an important metric for comparing options. Very high profile open source maintainers are less capable, due to the oversight of the community. Anarchist- or security-absolutist type personalities are less likely. The author and his project don't fit either bill.
I'm not arguing that this is the only metric that matters when considering one's options. But it does matter.
My understanding is that he may not have a choice. With a combination of an NSL and the All Writs Act, the US government can likely compel any non-FAANG domestic entity to do nearly anything short of murder.