Actually, it's worse than that. Even if you can review the source code, you STILL won't know for sure. As Ken Thompson put it three decades ago, "no amount of source-level verification or scrutiny will protect you from using untrusted code." This is because the tools we all use to process or transform source code (e.g., already-compiled-to-binary compilers, assemblers, loaders, etc.) may already contain secret back-doors. Ditto for even lower-level tools like hardware microcode.
While open-source software tools -- and open hardware too -- are much less likely to have secret back-doors embedded in them, there are no 100% guarantees. Ultimately we have little choice but to trust the people who create, package, and distribute all the software and hardware we use but didn't create ourselves from scratch.
If you're paranoid, use software written in the past 5 years and compile it on a system running FreeBSD 4.1 off a CD which you've had in your closet for the past decade.
I doubt such a thing is possible given the current state of the art in program analysis. Even if it were possible, it's hard to imagine it being undetectable (e.g. it would probably cause massive slowdowns in compilation).
> If you're paranoid, use software written in the past 5 years and compile it on a system running FreeBSD 4.1 off a CD which you've had in your closet for the past decade.
If you have a trusted compiler, you can bootstrap any non-trusted one. That said, I suspect implementing diverse double-compiling to build any nontrivial system would take significant work.
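To make the idea concrete, here is a toy model of diverse double-compiling, with "compilers" reduced to string-transforming functions. All names are illustrative, and the sketch assumes the trusted compiler T is honest; real DDC only needs T's trojans (if any) to not fire on the compiler-under-test's source.

```python
# Toy model of "diverse double-compiling" (DDC).
# "Binaries" are plain strings; "running" a compiler maps source
# text to a binary string. All names here are illustrative.

COMPILER_SRC = "source-of-compiler-A"

def run(compiler_binary: str, source: str) -> str:
    """Simulate executing a compiler binary on some source."""
    if compiler_binary.endswith("+trojan") and source == COMPILER_SRC:
        # A Thompson-style trojan re-inserts its payload whenever it
        # recognizes that it is compiling its own compiler's source.
        return f"bin({source})+trojan"
    return f"bin({source})"

trusted_cT = "bin(source-of-compiler-T)"   # independently built compiler T
clean_cA = run(trusted_cT, COMPILER_SRC)   # faithful binary of compiler A
trojan_cA = clean_cA + "+trojan"           # backdoored binary of compiler A

def ddc_check(cA_under_test: str) -> bool:
    """Does cA_under_test produce the same output as the trusted path?"""
    stage1 = run(trusted_cT, COMPILER_SRC)  # compile A's source with T
    stage2 = run(stage1, COMPILER_SRC)      # recompile with the result
    return stage2 == run(cA_under_test, COMPILER_SRC)

print(ddc_check(clean_cA))   # True: outputs are bit-identical
print(ddc_check(trojan_cA))  # False: the trojan betrays itself
```

The work, of course, is in making the two compilation paths produce bit-identical output for a real toolchain, which is exactly the deterministic-builds problem discussed below.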
Really? Why wouldn't they? That's one of the most valuable attack vectors, precisely because it's so difficult.
I wouldn't have guessed the NSA had people smart enough to break Windows Update's encryption using a brand-new cryptographic technique that also required several hundred thousand dollars of machine time to execute, but it happened nonetheless.
We have to start thinking ahead, e.g. by making deterministic builds a standard procedure: https://blog.torproject.org/blog/deterministic-builds-part-o...
If this logic made any sense then we should just give up, as their megabit-scale quantum computers have already cracked all keys on the planet.
My guess is you forgot the bit of the sentence you clipped.
So we need to start implementing deterministic builds in every major open source project if we even pretend to care about putting up resistance to what's going on. If the Tor browser can do it in just a few weekends, then so can we for mainstream Firefox, and hopefully eventually Ubuntu.
That's rather unsettling. Do you have any further information on this?
Why do you find it unsettling? I think that's exactly what the NSA does: stay ahead of everyone else and take advantage of what they know. In this case, a different MD5 collision attack technique was invented by Marc Stevens in about the same time frame, so you couldn't even say that [whoever wrote Flame] was ahead by a lot.
I found it more interesting that they knew about the Microsoft design errors that they exploited to break the update mechanism. And, of course, I wondered whether those design errors were forced.
I'm just trying to understand how it would work.
How are deterministic builds better than distributing binaries with hashes for verification? Just because I don't have to trust the original author's compiler?
Indeed, that's precisely why it's such an important protection mechanism. But it's about more than just not having to trust the original author's compiler. The original author might maliciously copy-paste some additional source code into the build process just before compilation.
Deterministic builds are a way for any of us to download any source code, build it, and verify that all of us are using a binary derived from exactly that source code, and nothing else.
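A minimal sketch of that verification step, assuming the build itself is already deterministic: independent builders hash their artifacts and compare digests. The file names and contents below are hypothetical stand-ins.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate two independent builders producing an artifact from the
# same source; with a deterministic build the bytes must match.
with open("build-alice.bin", "wb") as f:
    f.write(b"firefox-23.0.1 deterministic output")
with open("build-bob.bin", "wb") as f:
    f.write(b"firefox-23.0.1 deterministic output")

match = sha256_of("build-alice.bin") == sha256_of("build-bob.bin")
print("builds match:", match)
```

If even one builder's digest differs, someone's toolchain or source tree was tampered with, and that discrepancy is public.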
Cool. Thanks for the explanation.
My first thought is that the next attack will be analogous to fake Amazon reviews. "This binary with the hash e99a12d388afa2fa5fdde8ed3bcbe055 gets five stars! After using 51fc8eff10b2fccec9890fe5d1b0cfd9 for years, I've realized that e99a12 is far superior in speed and robustness. I recommend you upgrade today!"
You're assuming that because the NSA would like to have such a capability (as would most programmers), they actually do, simply because they have abundant money to throw around. Likewise, I'm sure the Air Force, Army, etc. would all like to have antigravity generators, and they have abundant money to throw at the problem, so they must have them, right?
Of course not. 'A compiler that adds backdoors to software' implies a compiler that knows which routines are for security and which are not. How exactly is it supposed to distinguish between security-critical code and everything else?
And that's before the other obvious objection: looking at compiled code in a debugger/disassembler is going to reveal lumps of code that were not put there by the original author. Invisible on a sufficiently large project? Sure, but an encryption/decryption program doesn't need to be very large to begin with; all it has to do is reliably transform a block of data into scrambled form and back again. That makes it more amenable to proof than most computer programs (not least because it has no need to be interactive).

Furthermore, we can easily imagine test cases that are very short. ROT-13 is a lousy cipher, but it is a cipher, and one that can be implemented in ~20 lines of non-obfuscated code. Now suppose we make a variant that asks the user how many places to rotate by (e.g. 14), and that number functions as our 'secret' key. Still hopelessly insecure to anyone over the age of 10, but what of it? Wouldn't your hypothesized 'insecurity demon' need to put a backdoor in anyway, because it is an encryption tool, be it ever so primitive? And wouldn't that block of code show up in a debugger?

If your answer is no, you're now positing hidden functionality that not only divines programmers' intentions and subverts those that are intended to add security, but also looks at the quality of the security algorithm and only sticks in a backdoor if it passes a certain threshold of cleverness.
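For concreteness, the toy cipher described above fits comfortably in a few lines: ROT-13 generalized so that the rotation count acts as the "key". This is a deliberately weak illustration, not real cryptography.

```python
def caesar(text: str, key: int) -> str:
    """Rotate letters by `key` places; everything else passes through."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

secret = caesar("attack at dawn", 14)  # key chosen by the user
print(secret)                          # "ohhoqy oh rokb"
print(caesar(secret, -14))             # decrypts back to the original
```

The point stands: any injected backdoor in a program this small would be a conspicuous lump in the disassembly.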
I wish compilers had those smarts built in! Think how much tedious/craptacular code could be automated away by simply labeling things as 'sekrit!!' and having the NSA module inside the compiler generate lightweight, reliable code with no real penalty! It would take the pain out of unit testing forever!
Timing attacks, for instance, could plausibly fall out of optimizations that terminate a loop when it's clear the value being computed won't change (e.g. it is false and is repeatedly being ANDed with things). This would probably be even worse for power consumption and other side channels. For a potentially easier-to-measure side channel, you might try introducing some state-dependent delay (short, and caused by something like that loop optimization) in a bit of code preceding a packet send.
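The hazard being described is the classic early-exit comparison: a loop that bails on the first mismatch leaks, via runtime, how many leading bytes were correct. A sketch of the leaky version next to the stdlib's timing-safe one:

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: runtime depends on where the first
    mismatch occurs, leaking the secret one byte at a time."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:  # the data-dependent early termination
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Compare in time independent of the contents (stdlib helper)."""
    return hmac.compare_digest(a, b)

secret = b"s3cret-token-value"
print(leaky_equal(b"a" * len(secret), secret))          # False, but fast
print(constant_time_equal(b"a" * len(secret), secret))  # False, fixed time
```

An "optimizer" that silently rewrote the constant-time form into the early-exit form would be exactly the kind of subtle, deniable sabotage discussed here.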
Alternately, introducing (via incorrect optimizations) the right kinds of buffer overruns or race conditions that corrupt a pointer just right, could get you a nice stack smashing exploit with certain (very abnormal) inputs -- and remote access to the machine in question.
The attack to be worried about goes like this: You build "Firefox Setup 23.0.1.exe" and intentionally insert a backdoor into the setup process. You make sure the setup process appears to function exactly the same as the clean installer (not hard). You then replace Firefox Setup 23.0.1.exe on various distribution websites with your malicious version. Or you MITM the distribution websites in order to send your malicious version in place of the one the user expects to be downloading.
Deterministic builds defend against that attack vector, while also defending against any hypothetical compiler-backdoor-autoinjector. You get both defense layers for free, just by using deterministic builds. This is a necessary step for the future, not an optional security layer.
Deterministic builds protect against compromised binaries of any kind, so we need to use them.
Unless I'm a spook (or group of spooks) and my binary includes a backdoor by design from the outset. Or unless I have backdoors built into the chips (a far more likely possibility than magic compiler demons). ISTM you're yo-yoing between treating the NSA as omniscient/omnipotent one moment and holding up things like this as silver bullets the next.
Deterministic builds aren't a silver bullet. But they're an important defense layer.
So your ROT13 program would not be affected.
Oddly enough, that is almost exactly how the Flame malware worked.
It injected itself into a compiler, causing the compiler to silently compile malicious code into custom firmware being compiled for nuclear reactor control equipment.
So at least one state actor has done just that at least once.
As for using FreeBSD 4.1 off a CD in your closet, I recently set up an NT4 machine with SP2 (requirement!) that was to be used entirely offline. Surprisingly, it booted and works fine on a brand new Intel H61-chipset machine with an SSD in it (graphics are stuck at 800x600x16, but that's a requirement too).
Consider, for example, hooking all fclose function calls and testing on every call whether
* you have write permission on the file,
* it's an object file,
* it's in an architecture your exploit supports,
* it uses the fclose function (or the corresponding system call, if it is linked statically),
* and your exploit is not already present.
If those conditions are true, hook the fclose calls in the object file before actually closing it, otherwise just close it normally.
Of course this does not work if the checksum is calculated before the file is closed by your backdoored compiler.
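The gatekeeping logic in that checklist can be sketched as a predicate over the file's contents. The ELF magic check is real; the "already infected" marker and the condition flags are hypothetical stand-ins for whatever a real implant would test.

```python
ELF_MAGIC = b"\x7fELF"     # magic bytes at the start of an ELF object file
MARKER = b"<exploit-v1>"   # hypothetical "already infected" marker

def should_infect(data: bytes, writable: bool, arch_ok: bool) -> bool:
    """Apply the checklist above to a file's contents (toy version)."""
    return (
        writable                        # we have write permission
        and data.startswith(ELF_MAGIC)  # it looks like an object file
        and arch_ok                     # an architecture the exploit supports
        and MARKER not in data          # exploit not already present
    )

clean_obj = ELF_MAGIC + b"\0" * 12 + b"...code..."
infected = clean_obj + MARKER
print(should_infect(clean_obj, True, True))      # True: would be hooked
print(should_infect(infected, True, True))       # False: already done
print(should_infect(b"plain text", True, True))  # False: not an object file
```

The caveat above still applies: if the build process checksums the object file before the backdoored fclose hook rewrites it, the tampering shows up.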
Given that statement, and the insinuations that the NSA had folks participating in standards discussions basically gumming up the works for things like IPsec, NIST standards, etc. (effectively crowding the people who know what they are doing out of the room), it's likely that they've created circumstances where accepting defaults, or not really understanding how to implement various technologies, creates attack surfaces or weaknesses that the NSA has known means to exploit.
Like anything else, you need to think about the risks and controls for whatever you're doing. If you're protecting the interests of a government or company likely to be spied upon, you probably want to factor the ability of a nation-state actor to intercept data into your risk calculus and operational strategy.
For the rest of us, you need to think about what the risks really are -- if you're a politician or other visible individual, you need to be very mindful of your communications privacy and practices. (Google "Anthony Weiner"). If you're posting on Hacker News and buying stuff from Amazon, the NSA decrypting that is a low-risk threat.
Even if you don't fully trust both compilers, the 'diverse-double compiling' is still useful:
"Finally, note that the “trusted” compiler(s) could be malicious and still work well for DDC. We just need justified confidence that any triggers or payloads in a trusted compiler do not affect the DDC process when applied to the compiler-under-test. That is much, much easier to justify."
It's probably possible to verify closed-source binaries by decompilation and a lot of effort, if you wish.
On the other hand, of compilers, hardware, OS, etc., the part of the computer most susceptible to being attacked (the application software) is also the part that's easiest to audit with open source.
If your chip's built-in RNG is cracked you can simply refuse to use it.
But if your libopenssl.so or CRYPTAPI.DLL is cracked you're probably screwed. It's at least somewhat straightforward to build your own libopenssl.so (or at the very least, verify that your distro's signed RPM rebuilds to the same sha1sum from the known-good source tarball + patches). There's nothing you can do to independently verify the source used for CRYPTAPI.DLL, and even the binary of that DLL requires trusting Microsoft (and figuring out which Service Pack, KB patchset, etc. are installed).
However, in the end you do have to trust people no matter what you do, so if you're willing to trust FooCorp's binaries, just go into that decision with eyes open. Even most open source users are effectively no different, in that they simply trust what Fedora or Debian are pushing out there.
But even one alert user is enough to catch a hacked .deb or .rpm, so you don't need most of the users to be paranoid. It's kind of a reverse herd immunity, if you will.
If you are talking emacs, or the kernel, or gcc, there is probably enough interest there (and strong personalities in charge of the projects) to keep things safe.
I don't know where the cutoff is. At thousands of users you probably can't count on someone else having looked through it. At millions you can.
However you don't have even the possibility with closed-source products. That means you must instead rely on the process used to create the software. So something by MS might actually be pretty trustworthy from a "security issues not accidentally introduced" standpoint at this point after years of improvement on secure coding practices.
But they still operate by a profit motive, and there's still the possibility of deliberate introduction of security issues when it suits business purposes or for legal compliance.
Unfortunately there's no great way to tell that an OSS project from RANDOM_DEV is well-coded without looking at the code, and it may be that a well-intentioned but junior dev introduces a whole swath of security bugs by accident.
I don't know where the cutoff is either. It may depend more on your threat model than anything else.
In a similar vein, would it not be possible to spot hardware backdoors in a PRNG by running many identical encryption tests across many different types of hardware, looking for one that doesn't behave like the others?
More generally, what tools exist for the automatic detection and mitigation of backdoors?
This was, of course, very expensive, and made software changes a very difficult and slow process.
Of course this rules out all 'cloud' software other than simple dumb data storage. RMS was right again.
 "Cloud computing is a trap, warns GNU founder Richard Stallman" http://www.theguardian.com/technology/2008/sep/29/cloud.comp...
Mostly the same reason they rely on third party infrastructure like electricity and plumbing: economics.
Cloud services have none of that, and can cut off service for any reason. Work on any competing service that "the company" dislikes, or do anything disruptive to people in power, and you'll instantly see your future, your business, your speech taken down to a 404 Not Found. Any logs, any files, any property at all confiscated. Even pure money is not safe from "indefinite suspension of the account".
Maybe the lesson is that if you ever want to run a scam, steal, or do anything that would normally land you in jail, all you need to do is go into producing third-party digital infrastructure. There is basically nothing a cloud service can do that would land its owners in jail, except tax evasion.
There was not a time when the mass population had their own generators, then flocked to supplied electricity in preference. They never had power independence.
Water was originally done locally, rivers and what not, but hygiene and health became a game changer. The benefits were huge and clear.
The whole history of all three is so different that you can't reasonably compare any of them.
In-house computing infrastructure is very expensive and most firms have their comparative advantage in doing Something Else, so it makes much more economic sense to run on AWS or somesuch instead of buying a pile of physical machines and running their own server farm. Likewise, many web and mobile services provide huge benefits for their users via network effects. I personally don't enjoy using Facebook enough to maintain an active account, but many millions of people clearly do, and I can't say they're wrong for choosing to spend their time that way.
What hosting solution(s) does your own company use?
We use Google Apps, Trello, Wave (for accounting), Corona SDK (that builds in Corona cloud servers), Google Drive (previously was Dropbox), Google Docs (we never used any office here, MS, Libre, or anything else), and so on...
I am personally against it, but the CEO decided it that way because he did not want to bother with IT infrastructure; we are not big enough yet to bother with it, and cloud stuff makes things much easier.
BTW, IT infrastructure? You are all networked, right? So, what's left? One large-ish server? Hardly my idea of "infrastructure", which to me implies something a bit more large and diverse. IMHO, being networked for the internet means you already have infrastructure, ish!!!
We don't even have a place to put a server... We already freak out enough fearing people will steal the workstations and test devices! (We are in Brazil; last year there were 20 mass robberies, 90 cops executed, several random murders, and 3 different incidents in which homeowners ended up in hospital after being shot with military-grade rifles in common robberies.)
My last employer was not into 'the cloud'. Just a smart manufacturing company, with sites in five countries, two data centers, one on each continent.
Which would die in about a day without the internet.
Each site could chug along for a while, but without the bits flowing in, and out, coordination gets out of whack and after about three workshifts they'd have no idea what to work on next.
Cloud or not, if the internet goes so goes the business.
That or they'd ask us to re-install those doggone fax servers we retired in 2007.
In any case, if your servers are running but you can't connect to your suppliers and customers - you might as well turn them off, your business likely can't function without connectivity even if you avoid the cloud completely; so avoiding cloud doesn't decrease your risks much.
Just buy two or more redundant, reasonably different internet channels, and then prolonged loss of connection is no more likely than other major downtime risks such as flooding, fires, etc.
Remember that video that was posted of a lawyer giving a talk to law students about never speaking to policemen? Remember how innocent things can be completely twisted and used against you?
Yeah, I'd worry about the NSA snooping in my calendar.
Unless the government puts explicit restrictions (not secret restrictions!) on how their resources (not just stored data!) can be used and actively punishes those using it improperly, then it should be assumed that all the resources at the NSA's disposal will be used for the convenience of the rest of the government.
FWIW, I'm not even from or in the US of A, but I'm worried about my government doing something exactly like that considering how much it's cooperated with the NSA directly in the past and present.
And how will they be treated? It seems: poorly.
Sharing your calendar can be twisted against you.
Not sharing your calendar can also be twisted against you ("you don't seem to be sharing your calendar like most of your fellow citizens, what are you hiding?").
Perhaps we should exercise our little grey cells more. Granted, for a busy salesman or something that's impractical, but I'm sure we could generally commit more exclusively to memory.
I posted this before, but to repeat... I think the legacy here is that we move, more and more as a matter of habit or routine, to a more secure way of living. No, we can't ever fully insulate ourselves from the NSA, but I think more and more of us will commit less to the internet, choose HTTPS via automatic extensions, use more secure software as a default, self-censor, and so on.
BTW it's a major trope in spy movies. Literally dentist appointment in "The Tall Blond Man with One Black Shoe" ("Le Grand Blond avec une chaussure noire").
(I say this from a position somewhere in-between: I was born in British Columbia, and my birth date is right on my birth certificate, but wrong on the public-health-plan card you also get issued at birth. I had a very strange time bootstrapping my first credit card...)
I also have my adoption papers, and I left the country I was born in when I was young (7 years old). As far as the country I live in is concerned, I go by my father's last name, but I have legitimate primary ID with my mother's pre-marriage last name.
Always wanted to get my spy on and build up a second identity with that. Of course that's highly illegal, so it's only ever been a fun thought exercise :)
It is possible to make cloud services where trust is not needed. It remains to be seen if cloud services will have to change to remain in business.
But then where do you get the software for said server? Or your mobile client? We're back to the age old question of how do you trust the compiler? How do you trust the source code?
There's a reason that Governments that take security seriously have their own, private, WAN infrastructure that has gateway upon gateway before it reaches the internet, if it ever does.
When my laptop had water spilled on it, there was no need to worry about my thesis. I went to a lab and started working on it again 10 minutes later. No work lost. My own backup solutions would be far worse, and would likely involve losing some work.
When using these services, I have to ask myself, how much do I value my privacy? I think I get more from google's services than I lose. For me, and a lot of people, it's not worth the time to set up an email server, let alone maintain one.
I'm not trying to change your mind; not everyone is meant to run their own systems. But I certainly want to counter your discouragement against it.
Heck. I really could host everything at home but the mere thought of the periodic hassle when some hardware or software inevitably breaks... I'd rather simply send all my email archive to NSA, KGB and Chinese gov't, the hassle is really not worth it (for me).
This is pretty much the conclusion that I've come to as well: using cloud storage to keep things in sync, but only doing decryption on the client. Even something like storing a SQLite database and encrypting all of the fields seems like it would leak information, compared to just encrypting/syncing the entire database.
An easier way for most people (Read: People who have never heard of cryptography. People who think 'DES' is a Government agency. People who think of an actual python when they hear 'python'. People who think Perfect Forward Secrecy means no one sees your Mom's embarrassing email forwards. There are a lot of such people, probably far more of them than software developers who speak C.) is probably to lobby for change in the draconian laws that authorize and encourage this sort of spookery. It is not a perfect solution, but in the long run, will probably make things easier for the average person.
What the NSA is doing isn't putting in "IF password = 'Joshua' THEN LetMeIn()"; it's subtle flaws in random number generators. The reality is that open source may be our best defence, but it's still a pretty poor defence because of the complexity of what's been compromised.
The more I think about it the more I think the actual best thing you can do is to pick obscure technologies that were probably too small to warrant NSA attention.
I don't think you need to be a security or cryptography expert to find vulnerabilities the NSA might try to exploit.
To use a metaphor, it's great that you're making sure your windows are shut but the locksmith may have sold you a deliberately weak front door lock.
What people fear is the deliberate weakening of the algorithm such that it can be broken (implementations can be fixed, the real nightmare being subverted algorithms), where broken doesn't mean what many people think it means.
Say you've got a 256-bit key; to brute force it you'd normally need to try 2^256 combinations, right? Well, if you found a flaw that let you brute force it in 2^200 attempts, that's a massive improvement and you can consider the algorithm broken. But guess what: that's still exponential complexity, and the issue can be solved by simply making the key bigger. And there are people working on these standards who are not on the NSA's payroll and are outside US jurisdiction, people who aren't idiots, so flaws much bigger than this aren't feasible.
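To put numbers on this: even the "broken" 2^200 figure is hopeless at any imaginable search rate. The 10^18 guesses per second below is a deliberately generous assumption, far beyond any known hardware.

```python
SECONDS_PER_YEAR = 3600 * 24 * 365

def years_to_bruteforce(bits: int, guesses_per_second: float) -> float:
    """Worst-case years to try all 2**bits keys at a fixed guess rate."""
    return 2**bits / guesses_per_second / SECONDS_PER_YEAR

# A full 256-bit search vs. the weakened 2^200 search, at an
# absurdly optimistic 10^18 guesses per second.
print(f"{years_to_bruteforce(256, 1e18):.2e} years")  # ~3.7e51 years
print(f"{years_to_bruteforce(200, 1e18):.2e} years")  # ~5.1e34 years
```

A 2^56-fold speedup sounds devastating, yet both numbers dwarf the age of the universe, which is why "broken" in the academic sense often has no practical consequence.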
This is why, even if they've introduced subtle flaws in current standards, that doesn't mean they have the capability of breaking the encryption - e.g. it is possible that they are able to break RSA-1024 keys, but RSA-2048 is an entirely different problem. And RSA-4096 keys will likely stay unbreakable, unless a huge breakthrough happens.
People give them more credit than they deserve: yes, they have cash and authority and can coerce companies and individuals and they can also plan for the long term, etc, etc, but let's be realistic about their abilities.
Yes, but I'm assuming you have to be a medium-to-expert level programmer, generally in C, at least, correct?
What I'm trying to say is that that is probably not the case for most people in the world, or even in the tech industry.
This is the whole point: OSS offers the potential to be safer, but only if there are qualified people with the skills and expertise digging into every nook and cranny of every bit of code.
Ironically, it's like the security services say: they have to win every time, the terrorists only have to win once.
The community have to spot and fix every exploit to secure the system, the NSA only have to get one in to compromise it - and these aren't like accidental flaws, they're things the NSA will know are there, know the extent of and know precisely how to take advantage of them.
On the other hand, NSA can simply demand the addition of a dumb IF just like you described in any proprietary software, and nobody will ever find out about it. If Windows XP contained backdoors in 2001, it may as well contain the same backdoors now.
Just because you can't get a 100% guarantee of safety, that doesn't mean that open-source isn't much safer.
And yeah, people exemplify with that embarrassing SSH flaw in Debian. Well, at least you found out about it, at least you know it was fixed.
Given that the assumption that most people would (reasonably) make is that OSS is more resistant to these things, if you're the NSA isn't that exactly what you'd target?
And given that we know that they've undermined standards, compromised hardware and so on, wouldn't it be naive to think that they'd not made significant efforts to infiltrate and compromise OSS projects?
I certainly agree safer, I'm just really not convinced that safer crosses the line to actually being safe.
Sigh. This sucks. I knew it was bad, I use encryption, I share only what I want to share, etc. And none of it turns out to really matter. The NSA's actions have made me as a non-US citizen or resident so wary of anything coming out of that country now :(
And with China having the same issues, the two big tech powerhouses... What's left? My own country (Australia) definitely would cooperate with these sorts of things too.
We are effectively controlled by tech dictators at this point, and no way out in the near future. This sucks.
The power asymmetry between the government and private citizens and companies is a matter of law not technology. Even if each person and company encrypts their data perfectly, they can still be compelled to give up the encryption key with the threat of jail or financial ruin.
I'm not saying that perfect technical security is not worth pursuing--it is, because it will force the matter into the legal realm. But if the legal realm isn't right, the technology by itself won't save us.
Actually, what gives me the warm fuzzies is that I can hire someone who knows what they are doing to read the code. So could a purely non-tech person. This can't occur with closed source software.
I understand the point about software, but what about hardware? If the NSA is corrupting software, then it must be corrupting chips too. How can chips be verified NSA spy free?
In principle, an analogous "Linus's Law" exists for open-source hardware. So if you have the code/design for the chips, that could be verified as well.
But for a modern microprocessor, it's impractical to reverse-engineer the chip at this level. One solution would be the manufacturer could publish the full plans and circuitry for the chip, which would make it easier to see if everything looks okay in the circuit, and then you could verify that what's on the chip matches the plans. For instance, if Intel published the circuitry for their digital random number generator, you could look at the circuitry, look at the chip, and be pretty confident that everything is okay. (Note that this could be published without being open-sourced. In a perfect world, patents would include all this information.)
The problem is for chips that have microcode that can be updated, knowing the chip itself is "correct" doesn't help you, since the microcode could be malicious. Even if the random number generator is valid, the microcode could modify the values before you get them. My conclusion is that even if your chip is 100% validated, it doesn't help you at all if you don't know what's in the microcode.
(This all depends on what kind of attack is being put into the chip, of course. I wouldn't notice any side-channel attacks built into the Z-80 such as timing-based attacks or power consumption. You could probably sneak in something super-obscure like the carry flag not getting cleared in some corner case and make it non-obvious, but that would be pretty hard to exploit.)
Don't accidentally hardwire RANDU as your PRNG, for instance.
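RANDU is the classic cautionary example: the recurrence x' = 65539·x mod 2^31 looks innocent, but because 65539 = 2^16 + 3, every three successive outputs satisfy an exact linear relation, which is why consecutive triples famously fall on just 15 planes in 3-space. A few lines demonstrate the flaw:

```python
def randu(seed: int):
    """IBM's infamous RANDU generator: x' = 65539 * x mod 2**31."""
    x = seed
    while True:
        x = (65539 * x) % 2**31
        yield x

gen = randu(1)  # the seed must be odd
xs = [next(gen) for _ in range(1000)]

# Since 65539 = 2**16 + 3, squaring it modulo 2**31 gives
#   x[n+2] == (6 * x[n+1] - 9 * x[n]) mod 2**31   for every n.
flaw = all(
    xs[n + 2] == (6 * xs[n + 1] - 9 * xs[n]) % 2**31
    for n in range(len(xs) - 2)
)
print("RANDU linear relation holds for all triples:", flaw)  # True
```

A hardware RNG with a defect (or backdoor) of this flavor would pass casual inspection while being trivially predictable to anyone who knows the structure.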
But more generally, yes there is an issue of having an initial base system which can be trusted to some degree.
It could enable several different things, like running systems where most modules have access to far fewer resources of various kinds, allowing less mischief (microkernels, hierarchical resource-management frameworks like GenodeOS, etc.).
Greater modularity would also mean that basic or common modules would need less changes, requiring new audits less regularly, thus decreasing that load on the community. Of course many vulnerabilities are the result of unexpected interactions between modules, but then a certain composition of certain modules could be audited and hopefully not require any changes for some time.
Being a software engineer I'm not kidding myself with regards to the huge software engineering problems inherent in achieving that, and of course the open source community already performs a lot of code reuse. Many will argue that an argument for more reuse is an argument against fragmentation and thus an argument against experimenting and forking. I guess that's true, but I still feel that more could be split and shared while still experimenting on many other things. There will also always be politics, personalities and the will to reinvent wheels for many reasons, like self-education or implementation pet peeves.
Also, different languages and/or runtimes/VMs, and their differing suitability for different environments, affect fragmentation greatly. We probably won't ever end up with a single "winner", no matter how many C/C++/JS/Go/Rust/Haskell/ATS/theorem provers we go through, and for reasons (only some of which are mentioned above) we probably don't want to.
Dreaming is nice, though.
There are so many components to systems now, and not all of them need to change all the time. What I imagine, and what you seem to be hinting at, is that we have to factor software into processes which vary by the amount of code churn. Instead of having 1000 modules being patched every week, have 500 modules which change every 5 years, 300 which change every year, ... and maybe 2 or 3 which change every day. This would let us maintain the pace of software innovation.
Unix does this to some extent -- think about when you would have a REAL need to upgrade "grep"? Almost never. It's basically hardware at this point.
Likewise, given the architecture of Chrome, and of the Quark browser posted yesterday on HN, you could update just the rendering engine or the JS engine, which run in restricted processes. Browser kernel updates could be separate, and verified separately.
Related paper I found interesting:
We are moving closer and closer to "ubiquitous computing", and I agree with that paper in that software upgrade has to be treated like a first class problem. Right now even the best vendors are too sloppy about upgrades and modularity. I am constantly being nagged for updates on every device, and at work I am constantly dealing with shifting sands underneath my code.
Yes, factoring software into different processes and enforcing proper sandboxing and resource restrictions is definitely one aspect of it. Hierarchies of such processes, where resource allocation can be delegated, make it even more interesting.
Another aspect is composing any one process of (where possible) shared, audited modules that haven't changed for a good while and that you're able to verify individually before building your binary. It requires proper code signing and delegates the problem to the question of whom to trust (and of course issues like Thompson's famous Trusting Trust). Once again, I'm not kidding myself about the difficulties of that either, but that shouldn't stop us from working towards such a goal if we find it desirable.
This of course helps closed source software production, but even more important is for any individual or FOSS organisation to be able to build their own system out of (a myriad of) components that can be cryptographically verified not to have changed since some specific audit event, provided that the signatures of single modules, and of compositions of modules, can be kept by a public and decently trustworthy actor (or preferably actors), like Linux distributions, the FSF, or whoever. In a world where the loyalties of individuals are easily purchased, we need to keep each other honest.
This extends from Joe building his own doubly-linked lists and red-black trees, to crypto modules that take side-channel attacks into account, to kernels and drivers and everything. Many will argue that layers and abstractions can kill software with complexity. I certainly agree that they can if used poorly (or, quite frankly, if not used with great skill), but they are also one of the few things that can make complex problems manageable, and, as you say, our systems are now hilariously complex.
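A minimal sketch of what verifying a module against its audit-time digest might look like. The manifest contents and module name here are made up for illustration; a real system would use a signed manifest published by a trusted party (a distro, the FSF, ...), not a bare dict:

```python
import hashlib

# Hypothetical manifest recorded at audit time: module name -> SHA-256 of
# the audited source. In practice this manifest would itself be signed.
AUDITED_SOURCE = b"struct node { struct node *next; };\n"
audit_manifest = {
    "listmod.c": hashlib.sha256(AUDITED_SOURCE).hexdigest(),
}

def unchanged_since_audit(name: str, current_source: bytes) -> bool:
    """True iff the module's source still hashes to its audit-time digest."""
    recorded = audit_manifest.get(name)
    current = hashlib.sha256(current_source).hexdigest()
    return recorded is not None and recorded == current
```

Before building a binary, each module would be run through a check like this; any mismatch means the module changed since its last audit and needs re-review.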
Well, it's probably easier to break into your home, plant an eavesdropping device, and get your keys. I think it's important to audit code, but let's be realistic: you won't defeat an agency that has you on its radar just because you use "open source software".
And by the way, Linus's Law doesn't make much sense, simply because some bugs cannot be seen just by looking at the source code, and also because the bandwidth between eyeballs that don't belong to the same brain is extremely limited.
If they were only targeting one person, maybe. But getting a back door added to a piece of widely-used software is much easier than bugging the homes of everybody who uses that software.
I agree with you in the sense that software should be made as secure as possible; my comment is just a reminder that one should be realistic about what level of security one can get.
If you didn't audit the hardware your software is running on, and your location isn't physically secure, spending so much time on software auditing isn't very useful.
You'd even have to thoroughly double-check that the software you have is the one you expect, audit your compiler, audit your OS, and recompile everything you have.
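As a sketch of that "is it the one you expect" check, assuming you have a trusted reference artifact to compare against (for instance, one you rebuilt yourself from audited source):

```python
import hashlib
import hmac

def _sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large artifacts don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def same_artifact(path_a: str, path_b: str) -> bool:
    """Compare two build artifacts by digest. hmac.compare_digest avoids
    timing side channels; for file digests that's belt-and-braces, but it
    costs nothing."""
    return hmac.compare_digest(_sha256_of(path_a), _sha256_of(path_b))
```

Of course, this only pushes the trust question down a level: it assumes the hashing tool, the OS underneath it, and the reference artifact are themselves trustworthy, which is exactly the regress the comment above is pointing at.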
You know better than I do that there is no such thing as absolute security; you're only secure relative to a threat. If you try to secure yourself against a first-world intelligence agency, I say... "Good luck." ;)
The point is not to create obstacles to criminal investigations but to thwart massive online private data collection.
I'm taking all private communication completely off the internet to minimise my exposure - I do not have enough knowledge or experience to make decisions on cryptography, nor sufficient need to waste time on this.
Maybe a private mesh network setup through line-of-sight directed antennas?
All emails or SMS or phone calls have to be just stuff like "Hey, how are you doing? Do you wanna meet up and have a beer?"
2048-bit is nothing. Here's why:
1. A great mathematician can find shortcuts. Tons of great mathematicians working in parallel will find them faster.
2. A fast processor can eventually brute force a key. Millions of fast processors working in parallel will find it faster.
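The linear speedup that parallelism gives can be put in numbers. For an n-bit symmetric key, exhaustive search tries on average half the keyspace, so expected time is roughly 2^(n-1) / (rate x processors). (Note that 2048-bit RSA keys are attacked by factoring, not exhaustive search, so that figure isn't directly comparable to symmetric key sizes.) A toy calculation, with all rates and counts being illustrative assumptions:

```python
def expected_bruteforce_years(key_bits: int, keys_per_sec: float,
                              processors: float) -> float:
    """Expected exhaustive-search time in years: on average half the
    keyspace must be tried before the key is found."""
    tries = 2 ** (key_bits - 1)
    seconds = tries / (keys_per_sec * processors)
    return seconds / (365.25 * 24 * 3600)

# A million processors, each testing a billion keys per second,
# against a 128-bit symmetric key:
print(expected_bruteforce_years(128, 1e9, 1e6))
```

Doubling the processor count halves the expected time, exactly as the comment says; whether the resulting figure is small enough to matter is a separate question.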
You are talking about a well-known method with a deterministic algorithm. You may as well just hand them a cake, because they are going to eat it up.
Why use encryption mechanisms they've trained us to believe in and use? And how could we beat them at a game where they hold all of the talent and power? The only answer is to have dark encryption methods that are not shared and that change frequently, frequently enough that there is no group of mathematicians with millions of servers that could determine the constantly changing algorithms.
I think these bug bounties are interesting. Is there something like a market price for one of these bugs? I.e. are other people out there besides Colin willing to pay for bugs in the tarsnap client?
I'm sure you could find some people who'd be prepared to pay for a valid Knuth error, although exactly how you'd advertise to, or seek out, those who have such things would risk wrecking the value of the thing you seek to acquire.
Stepping up from that, there are clearly existing markets for exploits, and you might be able to sell potentially exploitable bugs in the same way.
How do we know you're not (and have not been) selling tarsnap data to a corporation or non-US intelligence agency? This has always been a possibility, and it is roughly as probable as your being an NSA agent, even after the Snowden leaks.
You assume he is, then you read the source of the client that is sending the data, and verify that it's not. You trust the code, not the person. That's the TLDR of the article.
someone who doesn't seem generally surprised that the NSA are involved in espionage of all things... :)
"Despite all the above, it is still possible that I am working for the NSA, and you should not trust that I am not trying to steal your data."
now if you can just get people to pay attention to issues that result in dead children and reduced quality of /real/ life to people instead of overhyped and questionable civil liberties violations... XD