
We haven't added a backdoor to WhatsApp. The Forbes contributor referred to a technical talk about client side AI in general to conclude that we might do client side scanning of content on WhatsApp for anti-abuse purposes.

To be crystal clear, we have not done this, have zero plans to do so, and if we ever did it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise which is why we are opposed to it.




Sadly it boils down to "trust us" (or really, trust wcathcart), a position in which users have been betrayed countless times over the past decades (and Facebook has a horrible reputation for user privacy). Compare that with Signal, or any other application with an open-source client, where we can inspect the source code and compile our own client.


> Sadly it boils down to "trust us"

If this is done client side, it doesn't boil down to that. You can easily decompile the app and see for yourself what it does. You will also gain quite a bit of notoriety if you are the first one to catch them.

As he said:

> if we ever did it would be quite obvious and detectable that we had done it.
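
For the curious, getting started is easier than it sounds: an APK is just a ZIP archive. A minimal Python sketch (the filename is an assumption; pull the real APK off a device with adb first) that lists the package contents and extracts the Dalvik bytecode for a decompiler such as jadx:

    # An APK is a ZIP archive: list its contents and pull out the
    # Dalvik bytecode so a decompiler (e.g. jadx) can inspect it.
    import zipfile

    APK_PATH = "whatsapp.apk"  # hypothetical local copy (adb pull)

    with zipfile.ZipFile(APK_PATH) as apk:
        # Enumerate every file shipped in the package.
        for info in apk.infolist():
            print(info.filename, info.file_size)
        # Extract the bytecode for offline analysis.
        for name in apk.namelist():
            if name.startswith("classes") and name.endswith(".dex"):
                apk.extract(name, "extracted/")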


> You [can] decompile and see for yourself what it does.

Assuming your device allows you to get the binary. Apple is already in a position to disallow this if they choose to in the future.


> Assuming your device allows you to get the binary. Apple is already in a position to disallow this if they choose to in the future.

These kinds of things have never stopped anyone. Being the first to share the hash of a system file from a console is an achievement many hackers race for whenever a new one is released.

For sure, the harder it is, the fewer people will do it, and the easier it becomes for these things to slip under the radar, but for now it's not much of an issue.
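
For reference, the hashing part is a one-liner once you have the file; a minimal Python sketch (the path is an assumption):

    # Compute a SHA-256 digest that can be shared and compared against
    # what other people extracted from their own devices.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large files don't have to fit in memory.
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha256_of("whatsapp.apk"))  # hypothetical local copy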


For now. Until you need a scanning tunnelling microscope to maintain the fiction that you still own the hardware.


I believe people would extend much more good faith towards WhatsApp if it were possible to meaningfully use it without exposing all of your contacts (including non-WhatsApp ones) to Facebook's servers.

Right now, at least on Android, it seems impossible to add a new contact without adding it to your phone's address book and then giving WhatsApp full access to it. If you revoke the access, you can keep talking to existing contacts, but their names disappear. I would expect this is just a side effect of nobody caring about or testing for the case, but it attracts less charitable interpretations (assumptions that it is intentional, to force users to grant access).

I genuinely believe that, in terms of both usability and network effects, WhatsApp is the sweet spot among the secure messengers, and that the trade-offs they made (e.g. key escrow for backups and encouragement to do cloud backups) were made in good faith considering the average user's needs.


As a workaround (not a proper solution), you can put WhatsApp in a work profile with an app such as Shelter (https://f-droid.org/en/packages/net.typeblog.shelter/) and put only the contacts you want to use in WhatsApp there.

Granted, it's not ideal, and not even feasible if you already use the work profile fully (with contacts you don't want to share with WhatsApp).


Oooh. That's a cool idea, thanks! You can then also send those apps through an always-on VPN for filtering/analysis.

https://developers.google.com/android/work/requirements?api=...


I'm glad to see this intervention. Straight to my serious concerns: you had a technical talk about client-side AI.

Can you tell us a bit more about the circumstances? Is it something you are exploring to better understand the approach of a competitor (WeChat)? Are you receiving pressure to implement this?


I wasn't involved in the talk, so I can't speak to it in detail, but as I understand it the purpose was to explore spaces other than messaging. For example, one of the applications they showed was making abuse detection more robust to URL cloaking.


I appreciate your response, and will give you the benefit of the doubt; however, this is in conflict with what a superficial reading of Schneier's article suggests. The reporting would seem to indicate that this is happening, or in process, at Facebook.

While I cannot speak for the Forbes author, Schneier is widely reputed as a trustworthy source, especially on matters related to information security. This article calls into question either his professional reputation as an information security journalist or yours as an executive at WhatsApp.

As such, in order to help the general community decide for themselves, please shed some light on the following:

1. Does Facebook/WhatsApp have any specific plans for moderating content, via any mechanism, on the client? If so, please enumerate the kind/type of client-based content moderation currently in discussion.

2. Has Facebook/WhatsApp previously looked at doing content moderation on the client? If so, please enumerate the kind/type of client-based content moderation that was previously discussed.

3. What will you do if Facebook/WhatsApp decides to implement content moderation and/or a content "backdoor" on the client sometime in the next 3 years? Will you continue to work for Facebook/WhatsApp?

4. Should Facebook/WhatsApp decide to implement content moderation on the client, what forewarning will Facebook/WhatsApp give us? What forewarning will you personally give?

5. You say that this is easy to detect. Can you please provide technical guidance (or pointers to such) on how to go about detecting this, so that the community at large may better learn how to detect this in any instant messaging app, WhatsApp or otherwise?

I ask the above in all sincerity, as Facebook's previous poor handling of data requires these kinds of inquiries, especially when official statements are in opposition to reporting by Schneier, whose reputation as an information security journalist is second to none.


> 1. Does Facebook/WhatsApp have any specific plans for moderating content, via any mechanism, on the client? If so, please enumerate the kind/type of client-based content moderation currently in discussion.

I looked at what WhatsApp promised to do against fake news (something where they had reason to promise harsh measures, since they were basically blamed for murders due to their forwarding features). I'm aware of restrictions and warnings on forwarding, but not of any sort of 'fake news detector'.

> 5. You say that this is easy to detect. Can you please provide technical guidance (or pointers to such) on how to go about detecting this, so that the community at large may better learn how to detect this in any instant messaging app, WhatsApp or otherwise?

Reverse engineering their app. Doing it yourself is probably beyond the time you want to invest, and paying someone to do it just for you is probably beyond the money you want to invest, but I'd really love it if there were a group/entity that consistently checks (through reverse engineering and similar analysis) whether privacy promises given by apps are true and, most importantly, remain true over time.
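
To give a taste of what a first pass looks like, here is a minimal sketch, assuming classes.dex has already been extracted from the APK (the keywords are placeholders, not anything known to be in any app): it scans the bytecode for printable strings the way the Unix strings tool does:

    # First-pass static analysis: scan Dalvik bytecode for printable
    # ASCII strings (URLs, endpoints, suspicious class names), like the
    # Unix `strings` tool. Path and keywords are assumptions.
    import re

    PRINTABLE = re.compile(rb"[ -~]{6,}")  # runs of 6+ printable chars

    with open("extracted/classes.dex", "rb") as f:
        data = f.read()

    for match in PRINTABLE.finditer(data):
        s = match.group().decode("ascii")
        if "http" in s or "classif" in s:  # crude filter, adjust freely
            print(s)

Real verification needs much more than this (a proper decompiler, traffic analysis, and redoing the work on every update), which is exactly why a standing group doing it consistently would be so valuable.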


>we have not done this, have zero plans to do so

See also:

>Respect for your privacy is coded into our DNA, and we built WhatsApp around the goal of knowing as little about you as possible ... If partnering with Facebook meant that we had to change our values, we wouldn’t have done it.

I'm sure you personally are a nice, honest and well-intentioned person. Unfortunately WhatsApp's corporate messaging has zero trustworthiness and should be looked at with suspicion. Even when the person saying it happens to believe it.


nice


And I totally believe you /s


What about the claims that the server has access to group chat logs?

Because your denial only covers the moderation sugar on top. The damage was already done, a long time ago.



