Hacker News
Ask HN: How can I verify that WhatsApp uses E2E encryption?
58 points by Inc82 on Jan 1, 2015 | 45 comments
Everyone seems thrilled that WhatsApp has announced that they have switched to E2E encryption developed by Open Whisper. Is there a way I can verify this is happening?



Most of the comments so far focus on the fact that WhatsApp is a closed-source system. And just to be clear, it would absolutely be better to have source code. Source code gives you a 1000-foot view of the application and lets you spot obvious problems quickly.

But source code can also lie to you. To really understand what the application is doing, you need to do what security auditors do irrespective of source code availability. Namely:

1. Disassemble the application binary.

2. Debug the running application.

3. Observe the network traffic.
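Step 1 can start at the shallowest possible level: dumping the printable strings embedded in the binary, the way the Unix `strings` tool does, to spot library names and crypto identifiers. A minimal Python sketch (the `blob` here is a stand-in for a real app binary):

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Printable-ASCII runs of at least min_len bytes, the same
    heuristic the Unix `strings` tool uses."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# A stand-in for real binary data; in practice you'd read the app binary,
# e.g. data = open("whatsapp.bin", "rb").read()
blob = b"\x00\x01libssl.so.1.0\xff\xfeAES-256-CBC\x00\x7f"
print(extract_strings(blob))  # ['libssl.so.1.0', 'AES-256-CBC']
```

This only scratches the surface, but seeing (or not seeing) the expected crypto library names is often the first hint of what an app actually links against.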

Here's another thing to think about. Suppose the source code were available. How could you trust that the source code provided matches the compiled binary running on your phone? You would need to perform the above steps to verify.


This is why I think reverse-engineering needs to be far more common knowledge; maybe to the extent of being required in a CS curriculum just like going in the other direction (from source code to low-level CPU operation) often is. There is far too much blind trust in things at the "lower level", like compilers and libraries, and while it's much harder for the average person to reverse-engineer hardware and verify its operation (it requires specialised hardware too), with software it is relatively easy, and every programmer should know at least a little of it.

Of course there are the legal aspects of RE, which often dissuade people from even thinking about or discussing it, but I think that just telling people that, if they really wanted to, they could discover exactly what their software is doing is already sufficiently empowering. No doubt there would be plenty of opposition to this, primarily from the proponents of DRM and the like, who very strongly want software (and hardware) to be treated as "black boxes". But it is, at least with general-purpose computers, relatively difficult to stop people from examining them, and even more difficult to tell if they did --- which is why I think this knowledge of RE is truly liberating.

Stallman's story is also worth mentioning here: https://www.gnu.org/philosophy/right-to-read.html


I would at least like them to:

1) state in their Privacy Policy somewhere that they are using end-to-end encryption and they can't see the messages - this means that if they change that later and get caught, they could get in trouble with the FTC

2) show me somewhere in the app that I'm using end-to-end encryption. This would teach users that they have end-to-end encryption, and it would make many more of them angry if it turned out the company was "lying": showing that interface without actual end-to-end encryption behind the messages

Unfortunately, Whatsapp hasn't even made an official statement about adopting end-to-end encryption. Only Moxie has. So right now Whatsapp gets all the benefits of the media attention from "using end to end encryption", without having to hold themselves to that promise at all ("hey - you didn't see us say anything anywhere about using end-to-end encryption, did you?!").

So I don't necessarily want to see the source code, but at the very least they should publicly and legally commit to using end-to-end encryption.


Well for the reader, here's an interesting recent talk related to your last note: http://media.ccc.de/browse/congress/2014/31c3_-_6240_-_en_-_...


If the source were available, I could compile it into a binary myself, calculate a file checksum, and compare it against the binary downloaded from the App Store. Obviously not the case here, but I don't think you'd need to "perform the steps above" to verify.


And you would almost certainly not get the same checksum unless they provided you with the exact toolchain they used (which almost nobody does) due to compiler version mismatches or some such.


Even if you used exactly the same toolchain you would not get the same checksum because the binaries will probably have some other information embedded, such as timestamps.
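The timestamp point is easy to demonstrate: two compilations of identical code that differ only in an embedded build date produce completely different checksums. A minimal sketch (the byte strings are stand-ins for real compiler output):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Two "builds" of identical code that differ only in an embedded timestamp:
machine_code = b"\x55\x48\x89\xe5\x90\xc3"  # stand-in for real compiled output
build_a = machine_code + b"built: 2015-01-01 10:00:00"
build_b = machine_code + b"built: 2015-01-01 10:00:01"

print(sha256_hex(build_a) == sha256_hex(build_b))  # False
```

A single differing byte changes the whole digest, which is exactly why naive compile-and-compare verification fails without a fully reproducible toolchain.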


"deterministic builds" is actually a pretty hard problem with most existing compiler infrastructure. Bitcoin and Tor use a system called Gitian: https://gitian.org/


Having the source code might help you skip the first step (decompiling and disassembling), but you still have to perform steps 2 and 3 to analyse the behaviour of the program.

You might still have to perform step 1 if the source includes compiled third-party libraries, which might contain hidden routines performing malicious operations (such as decrypting data they shouldn't).


So what would the next step be when (not if) they don't match?


mmh

1. having a public repo of the official source code

2. hashing the executable you have on your phone

3. the repo publishes a hash of its compiled executable release

4. the repo approves or rejects the binary, based on decompiling the application and comparing it with the original source code

5. your OS cannot run code that is rejected, or will just warn the user

don't know if a digital signature could be used somewhere in there.

it would be much simpler to let the system download source code and compile it instead of downloading an executable, I guess, but that would require much lighter libraries...
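Steps 2 and 3 of the scheme above amount to a hash comparison; a minimal sketch (the function name and the run/warn policy are illustrative, not any real OS mechanism):

```python
import hashlib

def check_binary(local_binary: bytes, published_sha256: str) -> str:
    """Steps 2-3 above: hash the installed executable and compare it
    with the hash the repo published for the same release."""
    local_hash = hashlib.sha256(local_binary).hexdigest()
    return "run" if local_hash == published_sha256 else "warn-or-reject"

# Toy example: the "binary" is a stand-in byte string
release = b"official-release-bytes"
published = hashlib.sha256(release).hexdigest()
print(check_binary(release, published))            # run
print(check_binary(b"tampered-bytes", published))  # warn-or-reject
```

Note this only works if builds are reproducible, as the sibling comments point out; otherwise even an honest binary fails the check.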


Was reminded of: Reflections on trusting trust - Ken Thompson


I suppose, if they published the algorithm they claim to use, and if you can stub out the app's random number source (say LD_PRELOAD or the like), and if you can sniff the app's network traffic (again, LD_PRELOAD might be necessary if it's encrypted, assuming they're not using a statically linked SSL library), and if they don't perform one of any number of trivial modifications to the algorithm (such as adding a fixed salt), you might be able to, for a given message, confirm that for that message, they encrypt it identically to how the algorithm they claim to use would encrypt it.

But that's a lot of ifs, and doesn't prove there's no backdoor that's currently disabled.
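The core trick in the comment above (pin the app's randomness, then check the observed ciphertext against what the claimed algorithm would produce) can be sketched with a toy XOR stream cipher. This illustrates the verification idea only; it is not real cryptography, and the seeding scheme is invented for the sketch:

```python
import random

def toy_encrypt(message: bytes, key_seed: int, nonce: int) -> bytes:
    """Toy XOR stream cipher; a stand-in for the algorithm a vendor
    claims to use. NOT real cryptography."""
    rng = random.Random(key_seed * 65537 + nonce)
    return bytes(b ^ rng.randrange(256) for b in message)

# With the randomness pinned, the observed ciphertext must be reproducible:
observed = toy_encrypt(b"attack at dawn", key_seed=42, nonce=7)  # "sniffed"
expected = toy_encrypt(b"attack at dawn", key_seed=42, nonce=7)  # reimplemented
print(observed == expected)  # True
```

If the app's output ever differs from your reimplementation under identical inputs and pinned randomness, it is not running the algorithm it claims to.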


I think a lot of people here have missed the point a little. It's very easy to subvert E2E encryption of this sort, because no Whatsapp user has any way of verifying that they're talking to another Whatsapp user beyond the Whatsapp servers saying so.

The actual apps could carefully perform the E2E encryption, but Whatsapp could easily MITM the data if (say) requested to by an outside agency, without the app being any the wiser.

It's impractical to verify - you'd need the source to Whatsapp's servers, guarantees that their SSL keys haven't been compromised, etc etc etc.


If WhatsApp integrated everything in the `Axolotl` [0] protocol specification, this wouldn't be the case. To MITM, WhatsApp would have to know someone's private key, which never leaves the phone, or break the crypto; hence the E2E property.

[0] https://github.com/trevp/axolotl/wiki


It's always possible to MITM if you can't verify the signatures. No matter what they implement, the server can just relay messages back and forth.


You don't need to find someone else's private key. Just create your own private key and convince people to send stuff to it.
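This key-substitution attack is exactly what out-of-band key fingerprint comparison (which TextSecure exposes) is meant to catch: the server can swap keys, but it can't make the fingerprints match. A minimal sketch (nothing here is WhatsApp-specific; the key bytes are made up):

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """Short fingerprint users can compare out-of-band (read aloud, QR, ...)."""
    return hashlib.sha256(public_key).hexdigest()[:16]

alice_key = b"alice-public-key-bytes"   # made-up key material
mitm_key = b"server-substituted-key"    # what a MITM server relays instead

# Only an out-of-band comparison catches the swap:
print(fingerprint(alice_key) == fingerprint(mitm_key))  # False
```

The catch, of course, is that almost no users actually perform this comparison, which is the weakness the parent comments describe.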


If you want to easily do traffic inspection and forensic analysis of stored data for iOS and Android, you can check out the free Community Edition of our mobile app testing lab [1].

Disclaimer, co-founder here.

[1] https://www.nowsecure.com/apptesting/community/


You cannot verify that WhatsApp isn't cheating without a source code analysis. And it's even worse: WhatsApp is a daughter company of Facebook, so WhatsApp falls under Section 215 of the US Patriot Act.

In short: it's not Facebook's or WhatsApp's fault, but they're forced to cheat if US officials require it.

While there may be E2E encryption in WhatsApp, there is no way to establish that it's trustworthy.


Legally, if they indeed enabled E2E, the government shouldn't be able to force them to disclose the data. CALEA says you should decrypt the data for the government only if you have the keys. But with Axolotl E2E encryption, they're not supposed to have them.

Of course the government will try to threaten them with NSLs or tax audits or whatever, and Whatsapp could cave, but the law should be on their side.

But before we get there they actually have to put it in their privacy policy that they are doing that, so then they can show the judge later that they've legally committed to a certain level of privacy for their users.


it's not a case of threatening anyone.

they just pay them to include the additional primes in the public key system. users are still more or less secure. not just whatsapp. pretty much all the public key systems use it. it's still effectively 1024 bit plus keys for everyone but the nsa and gchq.

latest snowden leaks include everything you need for confirmation now you know what to look for.


the way it works is quite straight forward. the rsa key is the product of 4 primes rather than two. two of which are known to the nsa. this shortens the rsa key length to 128 bits which is factored using a sieve on their hpc platform in a fraction of a second.

this gives them the key used for the encryption and allows them to decrypt the messages. but still makes it hard for everyone else to decrypt (since they don't know the nsa primes).

no I'm not going to share the keys they use.
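To be clear, the claim above is unsubstantiated, but the arithmetic it relies on is easy to illustrate with toy primes: if two of four factors of a modulus are known, the remaining problem shrinks to factoring just p*q.

```python
# Toy primes only; real RSA primes are hundreds of digits long.
p, q = 1009, 1013          # honest secret factors
r, s = 1019, 1021          # factors alleged to be known to an agency
n = p * q * r * s          # the published modulus

residual = n // (r * s)    # what the alleged insider still has to factor
print(residual == p * q)   # True: the hard part shrinks to p * q
```

Whether any deployed system actually does this is exactly what the commenter fails to demonstrate; the sketch shows only that the described shortcut is arithmetically coherent.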


Basically, you can't. And this is precisely why open-source is so important for cryptography-, security- and privacy-related purposes.


You can't. WhatsApp is proprietary software that you aren't allowed to audit. Use free software chat programs instead.


How can I (developer, no security knowledge) verify that <open_source_alternative> does encryption properly?


You should assume it can't. The track record is very, very poor.


That's a funny statement, considering PGP and OTR are the go-tos and have held up all of these years.


What's the next example?


Truecrypt, also confirmed by NSA documents published by Der Spiegel to be 'catastrophic.'

OpenVPN. SSH-2 with RSA keys.

What proprietary software with good track records did you have in mind?


Truecrypt isn't a messaging system, is barely open source, and is barely trusted (though I think that's unfair). Compare, on the other hand, to "real" open-source disk encryption projects like EncFS/Ecryptfs.

OpenVPN is built on OpenSSL and was Heartbleedable.

Until a few years ago, SSH was a fiasco. Cryptographically, it has approximately the same security track record as SSL. It's also not a messaging system.

I didn't say I had a closed-source alternative for you. There aren't good answers here. I like TextSecure. I also like GPG, a lot. And I have a 4-figure bet with Matthew Green that OTR is more resilient than the other messaging systems. But OTR is mostly only OK if you don't use it with an actual chat client; once libpurple is in the picture, nothing is OK anymore.


how is SSH a fiasco? i'd love to read more about that.

"approximately the same security track record as SSL"? i'd say heartbleedable (openssl ssl) vs not heartbleedable (openssh) would be a rather incorrect approximation.

also, a messaging system could be tunneled through ssh.


How about investing money and/or developer hours into securing libpurple then? :)


Look for reviews by reputable security experts. Look for use of a standard cryptographic library, preferably a high-level hard-to-get-it-wrong library, rather than hand-rolled cryptography.


Several people have already raised the very good point that ultimately, we need the source code to be certain.

However, can we really be sure when we have the source? I don't think so. The codebase is likely to be large, especially when you start looking at dependencies such as the crypto libraries they may be using (unless you want to assume they are safe themselves), and it has been shown that humans are actually quite bad at finding vulnerabilities in code that is written to obscure its real purpose.

The Underhanded C Contest is a yearly contest that puts this to the test. Participants are given a spec for a small piece of software, and must write a program in C that appears on code review to work correctly, but in fact subverts the requirements in some way. This has been remarkably successful.

Sure, having the code is better than not having the code, but I think that gives us less security than many assume it does.


Would someone actually looking through those entries trying to find a problem fail? Or is it just "first glance doesn't show any problems" stuff? I thought it was the latter.


It would be great to throw entries at actual security auditors, mixed with innocent versions, and see how they fare.


We need a name for this. How about Bug-complete Turing Test? Or just Buggy Turing Test.


I'm down for naming it, but I think "bug" is the wrong term for what we're talking about here, since we're talking about deliberate misbehavior.


"we need the source code to be certain."

The source and a lot of effort, or just a tremendous amount of effort.


While you can never be certain that WhatsApp uses E2E encryption without a proper source code review, you can at least check it on your side by doing the following:

1. Install Charles web proxy.

2. Configure your device to decrypt WhatsApp's HTTPS traffic (install the SSL certificate and configure the proxy).

3. Enable SSL proxying for WhatsApp.

4. Monitor WhatsApp's traffic using Charles.

5. If you can see random encrypted text somewhere in the request or response, they are using E2E encryption.

I'll try it tomorrow might even write about it here or somewhere depending on the results.
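The "random encrypted text" test in step 4 can be made slightly less eyeball-based with a Shannon entropy estimate. High entropy is only a hint, though: compressed data also scores near 8 bits/byte, so this suggests but does not prove encryption. A sketch:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ~8.0 suggests encrypted or compressed data,
    low values suggest structured plaintext."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(shannon_entropy(b"hello hello hello hello"))  # low: repetitive ASCII
print(shannon_entropy(os.urandom(4096)))            # near 8.0: looks random
```

Run it over the payload bytes captured in Charles: plaintext JSON or XML sits well below 6 bits/byte, while ciphertext should be close to 8.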


As to "Use Free Software": the OTR protocol currently stands the test against various agencies and holds strong. I suggest using software that makes use of it, e.g. ChatSecure. Also, if you want someone who's not a random person on the internet telling you this: go watch the talk by Jacob Appelbaum and Laura Poitras from the 31C3.

The problem with closed-source software is, and will always be, that we can never be certain of its security (at least not without reverse engineering every version and fully understanding it).


I don't know exactly how, but maybe you can do so by tapping network traffic at your wifi router and analysing it.


I guess: route your phone's traffic through a computer (using the phone's VPN ability) and launch Wireshark.


That only shows a bit of information on some packets. Maybe there is a switch in the code that can be triggered remotely which then sends out your data without any encryption at all. You really need to look at the source to get any kind of confidence with stuff like this, and even then there are a lot of possible issues if your data passes through servers owned by others.


The packets might be encrypted, but Facebook/WhatsApp _could_ still hold a master key.



