To be crystal clear, we have not done this, have zero plans to do so, and if we ever did it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise which is why we are opposed to it.
If this is done client side, it does boil down to that: you can decompile the app and see for yourself what it does. You would also gain quite a bit of notoriety if you were the first one to catch them.
As he said:
> if we ever did it would be quite obvious and detectable that we had done it.
Assuming your device allows you to get the binary. Apple is already in a position to disallow this if they choose to in the future.
These kinds of things never stopped anyone. Being the first to share a hash of a system file from a new console is an achievement many hackers race toward whenever one is released.
Granted, the harder it is, the fewer people will do it, and the more these things can slip under the radar; but for now that's not much of an issue.
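The fingerprinting described above is just streaming the binary through a cryptographic hash and comparing the digest against one the community has published. A minimal sketch (the filename is a placeholder; you would point it at a binary you pulled from your own device):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large binaries need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical path):
#   digest = sha256_of_file("firmware.bin")
#   then compare `digest` against the hash others have published.
```

Any mismatch against the published hash means the bytes differ, though it says nothing about *what* changed; that's where actual reverse engineering starts.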
Right now, at least on Android, it seems impossible to add a new contact without adding it to your phone's address book, then giving WhatsApp full access to it. If you revoke the access, you can keep talking to existing contacts, but their names disappear. I would expect that this is just a side effect of nobody caring/testing for the case, but it attracts less charitable interpretations (assumptions that it is intentional to force users to give access).
I genuinely believe that both from a software usability and network effect aspect, WhatsApp is the sweet spot among the secure messengers, and the trade-offs they made (e.g. key escrow for backups and encouragement to do cloud backups) were made in good faith considering the average user's needs.
Granted, it's not ideal, and not even feasible if you already use the work profile fully (with contacts you don't want to share with WhatsApp).
Can you tell us a bit more about the circumstances? Is this something you are exploring to better understand the approach of a competitor (WeChat)? Are you receiving pressure to implement this?
While I cannot speak for the Forbes author, Schneier is widely regarded as a trustworthy source, especially on matters of information security. This article calls into question either his professional reputation as an information security journalist or yours as an executive at WhatsApp.
As such, in order to help the general community decide for themselves, please shed some light on the following:
1. Does Facebook/WhatsApp have any specific plans for moderating content, via any mechanism, on the client? If so, please enumerate the kind/type of client-based content moderation currently in discussion.
2. Has Facebook/WhatsApp previously looked at doing content moderation on the client? If so, please enumerate the kind/type of client-based content moderation that was previously discussed.
3. What will you do if Facebook/WhatsApp decides to implement content moderation and/or a content "backdoor" on the client sometime in the next three years? Will you continue to work for Facebook/WhatsApp?
4. Should Facebook/WhatsApp decide to implement content moderation on the client, what forewarning will Facebook/WhatsApp give us? What forewarning will you personally give?
5. You say that this is easy to detect. Can you please provide technical guidance (or pointers to such) on how to go about detecting this, so that the community at large may better learn how to detect this in any instant messaging app, WhatsApp or otherwise?
I ask the above in all sincerity: Facebook's past mishandling of data warrants these kinds of inquiries, especially when set against reporting by Schneier, whose reputation as an information security journalist is second to none.
I looked at what WhatsApp promised to do against fake news (something where they had reason to promise harsh measures, since they were basically blamed for murders due to their forwarding features). I'm aware of restrictions and warnings on forwarding, but not some sort of 'fake news detector'.
> 5. You say that this is easy to detect. Can you please provide technical guidance (or pointers to such) on how to go about detecting this, so that the community at large may better learn how to detect this in any instant messaging app, WhatsApp or otherwise?
Reverse engineering their app. Doing it yourself is probably beyond the time you want to invest, paying someone to do it just for you is probably beyond the money you want to invest, but I'd really love if there was a group/entity that consistently checks (through reverse engineering and similar analysis) whether privacy promises given by apps are true, and most importantly, remain true over time.
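Proper reverse engineering means decompilers (for Android APKs, tools like jadx or apktool), but a crude first pass anyone can do is scanning the binary for printable strings and grepping for suspicious terms, much like the Unix `strings` utility. It proves nothing either way, but it's cheap triage. A minimal sketch; the filename and the needle list are purely illustrative:

```python
import re

def printable_strings(data: bytes, min_len: int = 6):
    """Yield runs of printable ASCII at least min_len bytes long (like `strings`)."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

def suspicious_strings(path: str, needles=("moderation", "hash_match", "report")):
    # The needle list is hypothetical; pick terms relevant to the specific
    # claim you are trying to check.
    with open(path, "rb") as f:
        data = f.read()
    return [s for s in printable_strings(data) if any(n in s.lower() for n in needles)]
```

A hit only tells you where to aim the decompiler next; obfuscated or encrypted strings will evade this entirely, which is why it can't replace full analysis.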
>Respect for your privacy is coded into our DNA, and we built WhatsApp around the goal of knowing as little about you as possible ... If partnering with Facebook meant that we had to change our values, we wouldn’t have done it.
I'm sure you personally are a nice, honest and well-intentioned person. Unfortunately WhatsApp's corporate messaging has zero trustworthiness and should be looked at with suspicion. Even when the person saying it happens to believe it.
Because your denial only covers the moderation sugar on top. The damage was already done, a long time ago.