But source code can also lie to you. To really understand what the application is doing, you need to do what security auditors do regardless of source code availability, namely:
1. Disassemble the application binary.
2. Debug the running application.
3. Observe the network traffic.
Here's another thing to think about. Suppose the source code were available. How could you trust that the source code provided matches the compiled binary running on your phone? You would need to perform the above steps to verify.
Of course there are the legal aspects of RE, which often dissuade people from even thinking about or discussing it, but I think that just telling people they could, if they really wanted to, discover exactly what their software is doing is already sufficiently empowering. No doubt there would be plenty of opposition to this, primarily from the proponents of DRM and the like, who very strongly want software (and hardware) to be treated as "black boxes". But with general-purpose computers at least, it is relatively difficult to stop people from examining them, and even more difficult to tell whether they did --- which is why I think this knowledge of RE is truly liberating.
Stallman's story is also worth mentioning here: https://www.gnu.org/philosophy/right-to-read.html
2) Show me, somewhere in the app, that I'm using end-to-end encryption. This would teach users that they're on end-to-end encryption, and many more of them would be angry to find out WhatsApp was "lying" if that interface existed without actual end-to-end encryption behind the messages.
Unfortunately, Whatsapp hasn't even made an official statement about adopting end-to-end encryption. Only Moxie has. So right now Whatsapp gets all the benefits of the media attention from "using end to end encryption", without having to hold themselves to that promise at all ("hey - you didn't see us say anything anywhere about using end-to-end encryption, did you?!").
So I don't necessarily want to see the source code, but at the very least they should publicly and legally commit to using end-to-end encryption.
You might still have to perform step 1 if the source shipped with precompiled third-party libraries, which might contain hidden routines performing malicious operations (such as silently decrypting data).
1. having a public repo of the official source code
2. hashing the executable you have on your phone
3. the repo publishes a hash of its compiled executable release
4. the repo approves or rejects the binary, based on decompiling the application and comparing it with the original source code
5. your OS cannot run code that is rejected, or will at least warn the user
don't know if a digital signature could be used somewhere in there.
It would be much simpler to let the system download the source code and compile it instead of an executable, I guess, but that would require much lighter libraries...
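Steps 2–3 of the scheme above amount to a hash comparison, which is simple to sketch. This assumes builds are reproducible (compiling the repo's source always yields a byte-identical binary); the file name and hashes here are made up:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Step 2: hash the executable you have, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(binary_path: str, published_hash: str) -> bool:
    """Steps 3-5: accept the binary only if it matches the repo's published hash."""
    return sha256_of(binary_path) == published_hash

# Demo with a stand-in "binary" (real verification would hash the installed app):
with open("app.bin", "wb") as f:
    f.write(b"pretend this is the compiled app")

published = sha256_of("app.bin")        # what the repo would publish for this build
print(verify("app.bin", published))     # True: binary matches the published build
print(verify("app.bin", "0" * 64))      # False: tampered with, or a different build
```

Note this only works if the build is deterministic; otherwise step 4's decompile-and-compare is the (much harder) fallback, and a digital signature over the published hash would address the "who published this" question.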
But that's a lot of ifs, and doesn't prove there's no backdoor that's currently disabled.
The actual apps could carefully perform the E2E encryption, but Whatsapp could easily MITM the data if (say) requested to by an outside agency, without the app being any the wiser.
It's impractical to verify - you'd need the source to WhatsApp's servers, guarantees that their SSL keys haven't been compromised, etc. etc.
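That attack doesn't require breaking any cipher, only controlling key distribution. A toy sketch (XOR stands in for real encryption, and the malicious key server is hypothetical; none of this is WhatsApp's actual protocol):

```python
def xor_crypt(data: bytes, key: int) -> bytes:
    """Toy symmetric 'cipher': XOR every byte with the key. NOT real crypto."""
    return bytes(b ^ key for b in data)

# Bob's real key, and the key a coerced server would substitute for it.
BOB_KEY, MITM_KEY = 0x42, 0x7F

def malicious_key_server(recipient: str) -> int:
    """Alice asks for Bob's key; a coerced server hands out its own instead."""
    return MITM_KEY

# Alice "end-to-end" encrypts for Bob, using the key the server gave her.
ciphertext = xor_crypt(b"meet at noon", malicious_key_server("bob"))

# The server decrypts with its key, reads the message, re-encrypts for Bob.
intercepted = xor_crypt(ciphertext, MITM_KEY)
relayed = xor_crypt(intercepted, BOB_KEY)

print(intercepted)                  # b'meet at noon' -- the server read it
print(xor_crypt(relayed, BOB_KEY))  # b'meet at noon' -- Bob is none the wiser
```

Both endpoints see ordinary-looking E2E behaviour throughout; the only defence is out-of-band key verification (comparing fingerprints), which is exactly what's hard to audit without the source or RE.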
Disclaimer, co-founder here.
In short: it's not Facebook's or WhatsApp's fault, but they're forced to cheat if US officials require it.
While there may be E2E encryption in WhatsApp, there is no way to make it trustworthy.
Of course the government will try to threaten them with NSLs or tax audits or whatever, and Whatsapp could cave, but the law should be on their side.
they just pay them to include the additional primes in the public key system. users are still more or less secure. not just whatsapp - pretty much all the public key systems use it. it's still effectively 1024-bit-plus keys for everyone but the nsa and gchq.
latest snowden leaks include everything you need for confirmation, now you know what to look for.
this gives them the key used for the encryption and allows them to decrypt the messages. but still makes it hard for everyone else to decrypt (since they don't know the nsa primes).
no I'm not going to share the keys they use.
OpenVPN. SSH-2 with RSA keys.
What proprietary software with good track records did you have in mind?
OpenVPN is built on OpenSSL and was Heartbleedable.
Until a few years ago, SSH was a fiasco. Cryptographically, it has approximately the same security track record as SSL. It's also not a messaging system.
I didn't say I had a closed-source alternative for you. There aren't good answers here. I like TextSecure. I also like GPG, a lot. And I have a 4-figure bet with Matthew Green that OTR is more resilient than the other messaging systems. But OTR is mostly only OK if you don't use it with an actual chat client; once libpurple is in the picture, nothing is OK anymore.
"approximately the same security track record as SSL"? i'd say heartbleedable (openssl ssl) vs not heartbleedable (openssh) would be a rather incorrect approximation.
also, a messaging system could be tunneled through ssh.
However, can we really be sure when we have the source? I don't think so. The codebase is likely to be large, especially once you start looking at dependencies such as the crypto libraries they may be using (unless you want to assume those are safe themselves). And it has been shown that humans are actually quite bad at finding vulnerabilities in code that is written to obscure its real purpose.
The Underhanded C Contest is a yearly contest that puts this to the test. Participants are given a spec for a small piece of software, and must write a program in C that appears on code review to work correctly, but in fact subverts the requirements in some way. This has been remarkably successful.
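The contest entries are written in C, but the flavour translates to any language. Here is a toy analogue (my own illustration, not an actual contest entry) of one classic trick: a comparison that looks prudently length-bounded on review, but takes its bound from attacker-controlled input:

```python
def check_password(attempt: str, secret: str) -> bool:
    """Looks like a careful length-bounded comparison, but the bound comes
    from the *attempt*: an empty attempt compares zero characters and is
    therefore always accepted."""
    return secret[:len(attempt)] == attempt

print(check_password("hunter2", "hunter2"))  # True: correct password
print(check_password("letmein", "hunter2"))  # False: wrong password
print(check_password("", "hunter2"))         # True: the hidden backdoor
```

Any prefix of the secret is also accepted ("hunt" passes too), which a reviewer skimming for off-by-one errors can easily miss --- and the real contest entries are far subtler than this.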
Sure, having the code is better than not having the code, but I think that gives us less security than many assume it does.
The source and a lot of effort, or just a tremendous amount of effort.
I'll try it tomorrow; might even write about it here or somewhere, depending on the results.
The problem with closed-source software is, and will always be, that we can never be certain of its security (at least not without reverse engineering every version and fully understanding it).