Ask HN: Ads triggered by WhatsApp “end to end encrypted” messages?
398 points by rreyes1979 on Sept 23, 2022 | 453 comments
Hi all. My wife and I have been playing a silly little game lately: we send each other messages about some topic we never talk about and then wait for ads related to our conversation to start showing up on Instagram. Over the last month, they have never failed to show up.

Please keep in mind that this is a conversation between two "personal" accounts; no business accounts are involved. What's more, we haven't accepted the new terms of use that "allowed" WhatsApp to access messages between personal accounts and business accounts.

Is WhatsApp scanning personal messages to target their ads as we are noticing? Weren't WhatsApp messages end to end encrypted? Is this a violation of their Terms of Use or am I missing something silly?




Some options:

1. Nobody is reading your WA messages; the same topics can be learned from your browsing activity or other messages, e.g. by reading your SMS texts.

2. Meta is reading your messages directly in-transit, server-side.

3. Meta is not reading your messages server-side, but the Meta apps extract keywords from your conversations and request relevant ads from the ad servers.

4. Another non-Meta app is doing the above.

5...


If 2 is true, then it is not end-to-end encrypted, which would mean WhatsApp is lying, and I don't think they are. They have ways of doing what they want without lying, so I don't expect 2 to be true.

I think that 1 is the most plausible; however, the original post is about "topics they never talk about", so assuming that WhatsApp is the only channel and they don't leak data in other ways (and there are many other ways to leak data), 1 becomes unlikely.

3 is the most compatible with the observations. All the targeting can be done locally, so no unencrypted message content leaves the app. The app then sends your topics of interest to Meta.

For 4, again assuming WhatsApp is the only channel, there is probably some malware somewhere, and it is unlikely that Meta accepts illegally collected data (they can do it legally, better, and with less potential trouble). There are, however, a few legitimate apps that can do the above. I am thinking about things like predictive keyboards, accessibility apps (screen readers, ...), backup apps (end-to-end encryption is about transmission, not storage), and the OS itself. I don't think Meta controls any of these, and I don't think they would buy data from them (Google and Apple are competitors after all).

So I would go for an accidental leak (case 1): for example, for the experiment to be meaningful, you shouldn't tell anyone about the test topic before you receive the ads. Or for case 3, the WhatsApp app hinting to Meta about your topics of interest.


Another thing that would make 1 happen, even if they think they're not leaking information over different channels, is the software keyboard. Gboard is Google's, and likely has some data collection in one way or another. Similarly, there are a lot of Google-related services running with root privileges on stock Android phones that could easily snoop on data from various apps. This effect is worsened by other Android OEMs, like Xiaomi or maybe even Samsung, who ship their own invasive services on top.


Disclaimer: I worked at FB, but not on Whatsapp or ads.

I agree, I'm pretty sure 2. is not the case; I just listed it as a theoretical possibility. Despite all the bad press and problems, FB has very (very) high integrity and standards, at least the parts I saw.


Another option would be that Meta created a small model that could be run client-side and picked the right selection of ads to show elsewhere, without even exfiltrating keywords.


Well, my wife sent me a picture of my daughter working on a puzzle. Less than 24 hours later, her Instagram was showing ads for a store that was selling the same type of puzzle as the one my daughter was playing with. So it's not just text but images too.


Assuming Android: your pictures may be "parsed" by Google once they make it into Google Photos. Also, Meta may think that "parsing" the images in your local WhatsApp folder (or all of your local images) is fair game. Note that I have no clue if this happens, I'm speculating.


That doesn’t even make sense. Why go through the trouble of retargeting based on images? If you took a photo of something you likely already own it.


> If you took a photo of something you likely already own it.

Tell that to Amazon who never fail to recommend me things I just bought.


Hey if you really liked the vacuum cleaner you just bought, you definitely want to buy another one.


You have to believe that Amazon's algorithm is working as intended though, right?

The thesis is that recent vacuum cleaner purchasers are many times more likely (than the average person) to be looking to buy a vacuum cleaner.

Apparently about 20% of Amazon purchases are returned. And most returners are looking for a replacement. Some of the replacement product research is done before the return decision is made, so you get ads even if you have not initiated a return.

As much as Amazon doesn't want you to return your purchase, they really don't want you to buy the replacement somewhere else.

It would be interesting to measure how the ad ratio changes over time. Particularly when you exit the return window, but of course Amazon will know the return-likelihood curve with much greater precision.


Yes, for sure. Now I've got a different vacuum cleaner for every room. Never have I felt cleaner before.


Wait until you find out about whole home vacuums! Then you can turn The Whole House into a vacuum!


Looking through my photos, that's not the case. It's either things I don't own but like, places I enjoyed, or photos of things I want to buy / sent to my wife to buy. In my case, the photos would be a perfect targeting opportunity.


Mindlessly retargeting you based on things you own or have already bought is standard practice in the online ad industry.


Where did your daughter get the puzzle and when if she was just working on it :)


Yes, this is a bit similar to my option 3, but more sophisticated.


Whatsapp FAQ:

WhatsApp's end-to-end encryption is used when you chat with another person using WhatsApp Messenger. End-to-end encryption ensures only you and the person you're communicating with can read or listen to what is sent, and nobody in between, not even WhatsApp. This is because with end-to-end encryption, your messages are secured with a lock, and only the recipient and you have the special key needed to unlock and read them. All of this happens automatically: no need to turn on any special settings to secure your messages.

https://faq.whatsapp.com/791574747982248


> End-to-end encryption ensures only you and the person you're communicating with can read or listen to what is sent, and nobody in between, not even WhatsApp.

This does not exclude an algorithm running on the sender's or recipient's app from scanning the content and sending suggestions to ad servers.


I thought that too, but if that is the case then it should be relatively easy to find this hidden functionality by decompiling the APK and exploring it.


> only you and the person you're communicating with can read or listen to what is sent, and nobody in between, not even WhatsApp

I guess the key lies in "what is sent" in the above statement. The casual reader might reasonably interpret as "no-one except the intended recipient can see _what I type_". But it doesn't say that. It only covers what gets _sent_. It doesn't say anything about what happens to the content outside specifically _sending_ it to the other party(ies).


Even if that interpretation was correct in a broader sense, which I don't believe it is, certainly in this context of e2ee it isn't correct.

The more obvious explanation is that before or after the e2ee (i.e within the app itself), an algorithm scans the content, categorizes it and sends this to Meta/Facebook.

In this scenario, *Nobody* has read the content other than the person you're communicating with.


The local app running on your phone might even legally be considered "you", since it is running on your device under your user agent. I realize that is a bit of a nefarious take, but I could see that being the case...


Maybe Meta are not as trustworthy as I imagined.


Stop calling them Meta. It's Facebook.


don't die on this hill, it's all arbitrary and won't make a difference


Comcast rebranded as Xfinity because they knew everyone hated Comcast. Now everyone also hates Xfinity. It's fine.


So lying by omission?


Can I not train a text classifier on encrypted text?

Basically, let the AI figure out what ads get clicked the most for a given encrypted string of a 24-hour window of chat history. Eventually, the AI is going to hit on its "Rosetta Stone", even without ever formally decrypting the text, much less any human reading it.

With millions of conversations happening on WhatsApp, why shouldn’t that be possible?

And it's not even a breach, technically, because nothing ever got decrypted, and the similarity vectors generated by the AI have, per se, nothing to do with the content of the conversation or the individuals that sent them. Run the same training algorithm again and they'd look completely different! Hence they can't possibly be "personal data" in the sense of the law.


No. Encryption means the data is scrambled. Essentially unrecognizable from noise, save perhaps for some headers.

If you can discern meaning from noise, then your theory would work. But discerning meaning from random noise is obviously impossible (i.e. what if there is no meaning?).

If you leak more information than you say you do, then the encryption is worthless. Harmful, even, because you think you have protection when you do not.


It doesn't matter how many conversations are going on. With private key encryption, what a phrase encrypts to for one person would be different from the next. It would have to be trained solely on your conversation. Encryption is also dependent on all the text before it as well as the text in the same block, so you would have to say the same 16-byte phrase at the beginning of the message, with no metadata to throw off alignment, so many times that it could pick up a difference. I'm pretty confident it's impossible to get anything useful out of that.


If any algorithm can get even one bit of data about the plaintext then the encryption is broken by definition.


IANAL of course, but you should read the ToS carefully and you will probably find something that allows them to read your messages anyway.

e2e encryption doesn't forbid reading the messages as you type or read them, or reading a screenshot of the screen, or whatever else they can do inside an app :P

They were caught activating your camera by "error" a while ago https://www.macrumors.com/2019/11/12/facebook-bug-camera-bac...

As per the experiment you did...

We did the same experiment with a female friend a while ago. We started talking about her pregnancy (a topic we never touched, as she was single and of course not pregnant) in a group chat, specifically targeting her. Sure enough, after a couple of days her FB and Instagram were full of stroller ads (but not ours) :)


We are seeing the same thing. What's more, my wife sent me a picture of my daughter working on a puzzle. Less than 24 hours later, her Instagram was showing ads for a store that was selling the same type of puzzle as the one my daughter was playing with. So it's not just text but images too.


The more basic explanation is that she has been served ads for puzzles like that for a while, based on previous history and maybe retargeting from the company and you guys only notice the ad because it's more salient after sending a message with the puzzle in it.

There's no reason you'd have noticed an ad about the puzzle in the wash of content and other types of ads.


putting on my tinfoil hat

yup, do you think all this image recognition stuff is made for fun? companies want to use it to read the pics on our phones and profile us


Was the picture taken with WhatsApp, or with the camera app? Do you have Google Photos installed? I think GP classifies images, maybe also for advertising?


She used the WhatsApp camera app. We have Google photos too, but we have it configured so that WhatsApp images don't get uploaded.


Are group chats also E2E encrypted?


One explanation I've heard for mysterious "We were talking about it in person but nothing else" ads, is that if you were connecting to the internet from the same Wi-Fi access point or IP address as someone else that did a web search on the topic or visited websites on the topic, it has connected you by way of shared internet connection.

Is it possible something like that happened?

In general, while anything is possible, my own Occam's razor calculation is that if someone does have a way to get through ostensibly end-to-end encrypted messages, it's going to be government actors saving it for law enforcement/national security purposes. They wouldn't "waste" it on ad targeting. And if it were being secretly used for ad targeting, so many people would know about it, people who aren't disciplined military personnel bound by law to secrecy, that it would be quite likely to get out and be revealed and no longer be secret.


Being on the same network is not even necessary. Meta can still see who talked to whom at what time, and that would be sufficient to correlate the interests of both individuals.


My thought exactly. As an experiment I would compare talking about new topics in person only vs. on WhatsApp. Actually, there is a huge combinatorial explosion of variations you can test: verbal communication with and without devices present (in case you also suspect audio recording), WhatsApp communication but no verbal, combinations, same network and different networks, etc.


WhatsApp Privacy Policy:

How We Work With Other Meta Companies

As part of the Meta Companies, WhatsApp receives information from, and shares information (see here) with, the other Meta Companies. We may use the information we receive from them, and they may use the information we share with them, to help operate, provide, improve, understand, customize, support, and market our Services and their offerings, including the Meta Company Products. This includes:

...

- improving their services and your experiences using them, such as making suggestions for you (for example, of friends or group connections, or of interesting content), personalizing features and content, helping you complete purchases and transactions, and showing relevant offers and ads across the Meta Company Products; and


So most likely WhatsApp on your phone includes an engine that reads all your incoming messages and tells Meta that you are interested in some topic X based on your recent messaging history. Meta is not per se breaking the E2E encryption, but their app contains a backdoor that reports some topic-level information back to Meta that could be used to deduce what you are talking about without totally breaking the confidentiality of your correspondence.

(All this is just a guess based on OP's report and the above quote.)
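To make that guess concrete, here is a purely speculative Python sketch of what topic-level signalling could look like. Nothing in it (the category list, keywords, or output format) comes from WhatsApp; it only illustrates how an app could report coarse interest labels without ever shipping the message text:

    # Hypothetical sketch only: invented categories, keywords, and payload.
    # Illustrates topic-level reporting that never includes the message text.
    import json

    INTEREST_KEYWORDS = {
        "travel":  {"flight", "hotel", "vacation", "beach"},
        "toys":    {"puzzle", "lego", "board game"},
        "fitness": {"gym", "protein", "running shoes"},
    }

    def extract_topics(plaintext: str) -> list[str]:
        """Return coarse interest labels for one decrypted message."""
        text = plaintext.lower()
        return [topic for topic, keywords in INTEREST_KEYWORDS.items()
                if any(kw in text for kw in keywords)]

    def build_ad_signal(messages: list[str]) -> str:
        """What a topic-only report might contain: labels, never content."""
        topics = sorted({t for m in messages for t in extract_topics(m)})
        return json.dumps({"interests": topics})

    print(build_ad_signal(["Look at the puzzle our daughter finished!"]))
    # -> {"interests": ["toys"]}

A payload like that would be enough for ad targeting while leaving the encrypted transport technically untouched, which is why "end-to-end encrypted" and "used for ads" are not mutually exclusive claims.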


What's scarier than secretly reading messages is the idea that we are being manipulated into believing that we thought of the "random item" all on our own, instead of it being cleverly triggered by a series of manipulative ads or posts from friends.

Or, a similar idea is that ad companies don't really need to know anything about you so long as all your friends are "unprotected".

For example, you may pick "lawn furniture" as your "totally random" item to test WhatsApp. What you don't remember is that a good friend mentioned lawn furniture to you 3 days ago and just did 14 web searches on Google and FB marketplace to find some. They have strong metadata ties to you, so you get served ads on that topic too.


I'm sorry, but this feels like a highly irresponsible FUD post to me. (And I am not a fan of Facebook in any way at all, so let's put that out of the way.)

For years and years and years, there have been people claiming their voice assistant (for example) is listening in on their conversation to show ads, and so forth. And it's always anecdote, never any hard data.

And the thing is, if this were the case, it would be relatively easy to prove with a controlled experiment that other people can replicate. And yet, somehow, magically that never happens.

Sure, Google used to algorithmically read your Gmail to show you relevant ads, but they were totally open about that, and then they stopped because it weirded people out anyways.

If Facebook were mining Whatsapp messages for ad topics, they'd probably be as open about it as Google was, out of pure self-interest. Because right now so much of their advertising is about how Whatsapp is trustworthy because it's E2EE etc. So if they were secretly analyzing messages, it would blow up the reputation of their main marketing message. There's a good chance it would be business suicide for Whatsapp. A profit-driven company probably isn't going to take that risk.

To be honest, this post feels social-engineered by a messaging competitor or something. I'm not saying it is, but the personal touch ("silly little game with my wife"), the innocent questioning ("Is... or am I missing something silly?"), and the total lack of any objective evidence (e.g. screenshots of messages and ads) are all HUGE red flags.

If Meta really is doing this, it's pretty easy to prove with hard data, and that's going to become a front-page news story on the New York Times. The fact that that hasn't happened leads me to think it's much more likely there's nothing here.


Are you trying to say that FB was never caught in privacy scandals before? Did it blow up their reputation?


Not like this -- stuff like Cambridge Analytica revealed major lapses in security, but none of them ever contradicted their main marketing.

To the contrary, WhatsApp advertising E2EE so publicly can be seen as actively trying to get away from past scandals and building trust.

So the idea that they would be undermining themselves at the same time doesn't make a whole lot of business sense.


Assuming you're using a phone, is there a "keyboard app" that could be intercepting things before WhatsApp? other endpoint issues with security? Not that I'd be surprised to see a big company flatly lying about their product, but because they're so big I suspect you need to work hard to eliminate other possibilities before taking them to court.


No keyboard app. We use the regular Android Keyboard provided by Google.


But doesn’t Gboard send your typing data to Google?


Yes, it does. Also, DNS blocking will not work with many GApps phoning home.

Even the SwiftKey keyboard (bought by Microsoft) sends telemetry back to MSFT. Try a keyboard from F-Droid, but it may not be as feature-rich.


But this is Instagram and WhatsApp. Are they sharing data about our conversations openly?


An interesting experiment would be the same thing you are doing, but isolated in a note-taking app, to rule out the Google keyboard you're using. Also, it would be interesting if you could use e.g. Proxyman (available directly on iOS), or some proxy on your PC, to intercept your network traffic and then try to reproduce while blocking/allowing some domains, especially blocking all Google domains, then Facebook domains, etc. If you have a Pi-hole set up, doing that at DNS level may be easier.

I've never been able to reproduce these experiments. But keep in mind that I'm European and my WhatsApp app is slightly different - it is a version from WhatsApp Ireland (instead of WhatsApp Inc) which shares less data with third parties; the privacy policy is also slightly different for the European Union.

Edit. Another idea: try to reproduce while disabling predictive text on your keyboard.
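If you go the proxy route, a minimal mitmproxy addon can log (or block) requests to the domains you care about. The domain list below is just a guess at what is worth watching, and note that certificate pinning in the apps will likely stop you from seeing WhatsApp's own traffic contents at all; you would mostly see whether connections are attempted:

    # Sketch of a mitmproxy addon that logs (or blocks) traffic to chosen
    # domains. Run with: mitmdump -s domain_watch.py
    # The domain list is a guess; adjust it to what you actually observe.
    from mitmproxy import http

    WATCH = ("facebook.com", "fbcdn.net", "whatsapp.net", "instagram.com")

    class DomainWatch:
        def request(self, flow: http.HTTPFlow) -> None:
            host = flow.request.pretty_host
            if any(host == d or host.endswith("." + d) for d in WATCH):
                print(f"[watched] {flow.request.method} {flow.request.pretty_url}")
                # To block instead of just log, uncomment the next line
                # (the class is named http.HTTPResponse on older mitmproxy versions):
                # flow.response = http.Response.make(403, b"blocked for experiment")

    addons = [DomainWatch()]

You would point the phone's Wi-Fi proxy at the machine running mitmproxy and install its CA certificate; expect pinned apps to simply fail rather than reveal anything.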


Who is they? Google or meta?

Google are definitely collecting data from gboard.

That may not be directly shared with Meta, but it is likely to get indirectly shared through overlapping advertising identifiers. They won't be openly sharing your text, but they will be scanning it and flagging you as having an interest in something in your text, or something related to what you said, then sharing that with advertisers.


surely that just completely subverts e2e encryption??


So it's clear: your Gboard is harvesting all the input you type and then selling this data to the highest bidder. Nothing to do with WhatsApp, just good old Google harvesting everything for ads.


Facebook has an extensive history of grabbing people's data, lying about it, being caught and fined, apologising, then doing it again. So it's absolutely in character.


I am 99% sure Meta/Facebook have secretly broken WhatsApp e2e encryption by adding a second key to all users.

I have security code change notifications enabled, and around November 4, 2021 a large number of my unrelated contacts suddenly had security code changes. There wasn’t any media reporting at the time, but I remember some others mentioning it on Reddit[0] (would love if anyone here can scroll back in their message history and look for security code changes around the same time - maybe we can finally shine some light on this).

Since then I have assumed they are flat out lying about the fact that “not even WhatsApp can read your messages” (direct quote from the iOS app).

Also note that both iMessage and WhatsApp strongly encourage you to enable iCloud backups, which are not e2e encrypted and readable by Apple (Apple only claim backups are “encrypted” and that messages are “e2e” encrypted):

https://www.rollingstone.com/politics/politics-features/what...

At least Apple are not flat out lying like Meta, but they are still being incredibly deceptive with their marketing.

Use Signal if you care about e2e encryption. Everything else is a marketing sleight of hand.

[0] https://www.reddit.com/r/whatsapp/comments/qm2ufw/security_c...


"Also note that both iMessage and WhatsApp strongly encourage you to enable iCloud backups, which are not e2e encrypted and readable by Apple"

-> That's not completely true (at least for WhatsApp): it is possible to enable an e2e encrypted backup right in the chat-backup menu.


You are right, I should have said that WhatsApp backups are "not e2e encrypted by default".

However, I still believe Facebook holds a second decryption key for all messages, which they rolled out along with their web access product as described above. So they are not e2e encrypted by any reasonable interpretation of the phrase.

I am not aware of any way to e2e encrypt iCloud backups, so the vast majority of “e2e encrypted” iMessage messages are readable by Apple.


I'm going to say the same thing I always say when it comes to E2EE.

E2EE does not mean anything in a world where both ends are owned by the transport layer.

I'm not saying they're doing anything wrong, you could be mistaken and information can be exfiltrated some other way.

But: either you trust the transport layer or you don't. Saying "E2EE means the transport doesn't have to be trusted" while running a nigh impossible to reverse engineer binary on both ends, distributed by the network, *is* trusting the network.


I think Meta is reading your messages locally on your device and showing you personalized ads from the messages that are actually on your device. It's not uploaded to Meta's servers and is not in any way breaking the e2ee, because your device is one of the 2 ends. If you don't use Facebook or Instagram on your phone then no personalized ads are shown.

Everything above is supposition based on something I vaguely remember; I'm not 100% sure.


Whatsapp isn't open source. How do you know that the messages are actually e2e encrypted?


Decompilation. Reverse engineering. Network monitoring. Third-party attestations like https://research.nccgroup.com/wp-content/uploads/2021/10/NCC.... The lack of whistleblowers from within Meta itself. There are hundreds of employees working on this product.


LOL no need for elaborate MITM efforts if both endpoints are completely closed source and totally pwned and corrupted.

Of course that reads backwards just as well: no need to implement complicated "end to end encryption" if both endpoints are hopelessly pwned.


Every few months someone thinks they've proven that these companies are recording us via our phones, scanning our messages secretly, etc. What's more likely: that these companies are breaking the law but managing to keep it secret, with not one engineer who worked on these features speaking out after all the whistleblowers that have come forward, or that you're predictable enough that they can guess what you're wanting/thinking from the content they already serve you?


I can attest I've seen this behaviour and played this game with a friend. We concocted obscure conversation points, e.g. "flamingo statues", and not long after would get ads in the right ballpark of relevance on Instagram. Hard to know if it's nefarious, as it could be mere coincidence or confirmation bias.

Tangential aside: it still confounds me where the business opportunity of WhatsApp resides for Meta if they "can't" get access to the data.


That's nothing. I talked to my wife in bed about sex and then immediately received an email selling viagra and cialis. It's crazy, but I think gmail must have put a microphone in my pillow. It's the only way.

I was sleepy so when I woke up I tore apart the pillow after brushing my teeth but there was nothing there. They must have taken it out surreptitiously when I was in the bathroom.

I know it wasn't my wife because I put her phone in a Faraday cage to stop her from using the Internet when I'm home. Unless... now that I think about it. Unless she's secretly working for Facebook. She said a friend of hers could get her a Portal webcam. No one buys those things unless they're working for Facebook!


For a few years, WhatsApp "e2ee" messages were stored in plain text in your Google Drive backup (that's how it worked on Android; I don't know about Apple). This was even stated in their FAQ.

This was part of a FB/GOOG deal where the storage for WhatsApp backups did not count against your Google Drive quota.

Recently the backups did finally become encrypted as well, with a key known to the WhatsApp app. (On Android, it is stored in a file called "key" in the app's local storage.)

However, when you restore the backup, where does the key come from? From the WhatsApp servers, obviously.

So FB and GOOG together still have full access to your daily backed-up messages.

And the free storage deal is still there, of course.

Please do correct me if I'm wrong and you know better.


I understand your point. Their track record regarding handling users' data is bad, so egregious that in Europe laws are being crafted to force them to keep users' interests at heart.

Let me tell you about another third party that is not mentioned here: the telecom operator. Once, in 2019, I was reinstalling WhatsApp on my smartphone, and I checked the outbound Internet traffic records out of curiosity. This led me to find out that my phone was reaching out to *whatsapp.com (Meta's servers) and to Belgacom (Belgium's telephony provider). So my phone's data was being routed through the parent company's servers and some other third-party services like a national telecom company. I don't know if it's still the case nowadays, but since that day I have had more worries about their encryption and data relay practices.


"WhatsApp's end-to-end encryption is used when you chat with another person using WhatsApp Messenger. End-to-end encryption ensures only you and the person you're communicating with can read or listen to what is sent, and nobody in between, not even WhatsApp"

This has always been disingenuous.

WhatsApp control the client, the client displays the unencrypted message, ergo WhatsApp can read the message.

It provably does when it interprets links and does a web page preview card.

Also... that is highly likely leaking your advert profile, because even if the preview didn't, any visit to the website is outside of WhatsApp and is now tied to your IP, browser cookies, etc.

All of the above can be true without end-to-end encryption being broken or otherwise defeated on the server side.


Could it just be confirmation bias? Or could it be that the correlation is the other way around: you see something online, decide (unconsciously) to pick that as your topic with your partner, and the ads show up because of the place where you initially saw the topic.


The best way to solve this would be to pick three topics ahead of time. If you're worried about the phone's microphone, do so by pen and paper.

Then randomly select one of those three topics to discuss on WhatsApp.

Finally, keep an objective eye out for ads about all three topics. Ideally, log every single ad you see.


Better to have a computer pick random topics.
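Agreed. A few lines with a CSPRNG keep your own recent browsing from subconsciously "suggesting" the topic; the topic list here is obviously just an example:

    # Pick the test topic with a CSPRNG so it isn't seeded by your own
    # recent browsing; the unsent topics double as controls to watch.
    import secrets

    TOPICS = ["flamingo statues", "curling brooms", "yurt insulation",
              "left-handed scissors", "theremin lessons"]

    chosen = secrets.choice(TOPICS)                  # discuss only this one on WhatsApp
    controls = [t for t in TOPICS if t != chosen]    # log ads for these too
    print("send:", chosen, "| controls:", controls)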


While that's possible, the more plausible explanation is Meta reading your messages imo.


Is it? I feel like this would be easily reproducible and would bite them in the ass big time.


It's a shame this kind of thing is so hard to prove, otherwise it would be all over the media. People will write it off as 'coincidence'. "Perhaps you looked for or discussed it elsewhere".

What happened with Skype before was that Microsoft would ping any links from their servers, so it was really easy to prove it by setting up a new web server, publishing it nowhere and then mentioning it in a chat. This caused some publicity and they stopped the practice. Skype didn't guarantee E2EE at that time though.

But perhaps you could do a similar 'clean room' exercise to prove it. I don't think they would break the E2E, by the way, but perhaps there is something calling home in the app itself.
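The same "canary link" trick still works as a rough sketch today, assuming you can expose a throwaway host. Everything below (port, response body) is made up; the only point is logging who fetches a URL that was shared nowhere except the chat under test:

    # Minimal canary-link server: host on a fresh, never-published domain,
    # share the URL only inside the chat under test, and log every fetch.
    from datetime import datetime, timezone
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CanaryHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            stamp = datetime.now(timezone.utc).isoformat()
            ua = self.headers.get("User-Agent", "-")
            print(f"{stamp} {self.client_address[0]} {self.path} UA={ua}")
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"hello")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()

Any fetch from an address that isn't you or the recipient points to some third party touching the link, though a preview fetch coming from the recipient's own device would still be consistent with e2ee.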


It's not hard to prove at all. Randomized controlled study organized over the phone so it's subject to legal protection from surveillance, unlike random FB messages.

If 200 people send preselected product communiqués over brand new devices and are 800% more likely than a control group of the same size to be targeted by ads for those or related products, then we have enough for a legal case and discovery.
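And checking whether a difference like that is statistically meaningful is a couple of lines; the counts below are invented purely to show the shape of the test:

    # Toy significance check for the proposed study (counts are invented).
    # Rows: sent the chosen keyword vs. control; columns: saw a related ad or not.
    from scipy.stats import fisher_exact

    table = [[45, 155],   # treatment: 45 of 200 saw a related ad
             [ 5, 195]]   # control:    5 of 200 saw a related ad

    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2g}")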


That's still a lot harder than having one person send one link and immediately finding it grabbed :)

It really should be airtight if they're not going to weasel out of it.

I kinda doubt Meta is doing it but on the other hand they do seem to be in heavy weather right now.


It's incredibly easy to prove. You could prove it by setting up clean devices.

Plenty of people have attempted to prove it, and gone radio silent after trying.

There are billions of users, and tens of millions of them are technical, and tens of thousands of them work for these companies. Yet there's 0 evidence of this ever actually happening. Does this not seem strange to you?


It's incredibly easy to prove.

The reason it's not all over the media is precisely because it doesn't hold up.

Anybody who's dedicated to this can select truly obscure terms, fully document their private chats and full internet usage and ads shown, and show whether this effect is actually happening.

The reason we don't hear about this is because the snooping doesn't appear to be happening. So you just get a bunch of people sometimes claiming it "seems like" ads are coming from private chats, because coincidences do happen statistically so it will always happen to some degree to some people.

The reason it's not all over the media is simply because the phenomenon doesn't appear to exist, not because it's hard to prove.


Consider the following scenario:

type a message -> msg encrypted -> msg sent -> msg received -> msg decrypted -> msg viewed in app

Then consider the following:

type a message -> msg encrypted -> msg sent -> msg received -> msg decrypted -> app scans content and sends classification to ads server -> msg viewed in app

Both are end-to-end encrypted.


This is likely the explanation. I put Signal on a phone I initially used exclusively for work stuff, and the other day chatted about some car tires. Next thing we know, tire ads in the mobile browser. The main PC (which has linked Signal installed) got generic ads when I looked online, suggesting one of the apps is grabbing info from the keyboard.


Whenever you post a link in Whatsapp, the app tries to create a card from the metadata available in the link. The procedure to make the card presumably goes through fb servers and is outside the e2e. Could they be using that?

Another hypothesis is that you are taking other steps, such as searching for that topic so that you can send something to your wife; those extra steps might be enabling tracking.
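On the unfurl point: a link-preview card is basically a fetch of the page's Open Graph tags, roughly like the sketch below. I'm not claiming this is WhatsApp's implementation; where that fetch runs (sender, recipient, or server) is exactly the question that matters:

    # Rough sketch of what building a link-preview card involves: fetch the
    # page and pull its Open Graph title/image. Where this code runs is the
    # interesting part, not the fetch itself.
    import re
    import urllib.request

    def fetch_preview(url: str) -> dict:
        with urllib.request.urlopen(url, timeout=5) as resp:
            html = resp.read(65536).decode("utf-8", errors="replace")
        tags = dict(re.findall(
            r'<meta[^>]+property="og:(\w+)"[^>]+content="([^"]*)"', html))
        return {"title": tags.get("title"), "image": tags.get("image")}

    print(fetch_preview("https://example.com"))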


I often wonder this about unsolicited media text messages (which, for me at least, are almost all political) and whether they might use the iMessage automatic “preview” function to track whether or not a message has been opened/viewed. As far as I’m aware it’s not a feature you can turn off on iOS.


I’m pretty sure the iMessage previews are generated by the sender and sent with the message itself - the recipient doesn’t parse the link.

Can easily be proven by sending someone a link that only you can access (hosted on the LAN which the recipient can’t reach) and seeing if the recipient still sees a preview.


Well, yeah. Of course lots of companies are targeting us for ads based on data that we lightheartedly assumed was private. It's even possible that they could do this without violating their ever-changing privacy policies. For example, they could say that the programs that review your messages don't retain any personally identifiable information. It might even be true. (Although there's no auditing of this and little or no accountability.)

Online ads ... if you've ever paid for one you know they are desperately in need of targeting. We consumers provide our info directly to folks who sell ads, under terms and conditions that we don't understand. Of course they're making use of this free resource. They'd have to be idiots not to.

So ... with respect, the wave of denial in the comments here ... 10 years ago, that would have seemed "naive but understandable." Today it's just weird. Almost like some kind of absurdist comedy. It's totally disconnected from the world we actually live in.


The app that you type your message into is controlled by Meta. They can do whatever they want with what you type and THEN encrypt the message and send it to the other person.


OMG, no, it's "END TO END." That means WHATEVER I NEED IT TO MEAN. Even if the provider promised they ARE using my messages to target ads, THAT NEEDS TO NOT BE HAPPENING.

Irony notice. This message contains irony.


WhatsApp make no specific claims about who this encryption is keeping you safe from. And they also require you to agree that they can use your information and interactions for their legitimate business needs. I mean, WhatsApp is standing right there when you stuff the message into the box, regardless of how safe the package is in transit once it's left your phone. And consider that basically every 'enhancement' to security or privacy around Facebook was done under duress for years. Pre-acquisition WhatsApp is a different story, but that story is ancient history.

I didn't agree to the recent WhatsApp or Facebook TOS, so I no longer have their products on my devices. I suggest you do the same, or just sit back and enjoy the specialised, relevant, targeted ads, but think twice before each send.


What I desperately want at this point in time:

A button next to every ad that says "why me?" that details every byte of my data scraped to generate that specific ad. Was it a GPS location from an hour ago? Did you scan through my photos? Did you figure something out from my youtube watch history? TELL ME!


The small blue AdChoices triangle logo in the corner of some ads, if clicked, will sometimes tell you some of that information, like who served the ad, whether the targeting was from the content of the page or an audience segment, etc.


That’s technically required by the GDPR - any data controller must give you all the data they hold about you upon request and be able to review or explain any automated decision. This would include ad targeting data.

The problem is that the GDPR isn’t enforced enough even at a very basic level, let alone technicalities like this.


What types of phones are you and your wife using? My personal theory is that a lot of these situations are actually the phone itself (or a third-party keyboard app, apps copying clipboard content, etc.) doing the "spying", and not WhatsApp somehow bypassing the E2E encryption.


Can someone working on WhatsApp weigh in on this or are there very restrictive NDA's? I would expect it's an interpretation of the TOS, so WhatsApp should be able to communicate if this is a possibility or not.


I work on a competing product (not for any of the named companies in this thread).

I don't think there is any fault with the e2e encryption. Humans are very prone to seeing causality where there is none, or to accidentally leaking their thoughts into the search box.

There could also be leaks with the clipboard, photo gallery or keyboard - all things that freemium apps love to scan in the background. The way real-time-bidding ad markets work, anyone that the data leaks to can influence ad ranking - doesn't have to be FB/Meta.

If you did a true blind study, I think you'd find no link.

For example, start with a list of 1000 product images and accompanying text. Select 2 at random each day. Flip a coin to decide which to send (keep the other as a control). Cover the screen so the user can't see what they've sent/received. Then, a few days later, ask the user to select which product they think they sent.

I'd bet that even after months of doing this, there will be no finding of a leak.
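A sketch of that blinded daily draw, with the product list and file name as placeholders:

    # Sketch of the blinded daily draw: pick two products, coin-flip which one
    # is "sent" and which is the control, and log the assignment where the
    # participant won't see it until the study ends.
    import csv, datetime, secrets

    PRODUCTS = [f"product_{i:04d}" for i in range(1000)]  # stand-in list

    def daily_assignment(log_path="assignments.csv"):
        a, b = secrets.SystemRandom().sample(PRODUCTS, 2)
        sent, control = (a, b) if secrets.randbits(1) else (b, a)
        with open(log_path, "a", newline="") as f:
            csv.writer(f).writerow([datetime.date.today(), sent, control])
        return sent   # the sending step itself should keep this hidden from the participant

    daily_assignment()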


Guess nobody from Meta could comment on this without legal writing the answer for them.


Why would they want to even if they could? Nobody would believe their answer.


It is unlikely that Meta is able to decrypt the messages on their side. The WhatsApp desktop client is actually very crippled by the fact that it can't just ask someone for the messages; it is another participant in the end-to-end encryption and needs the shared keys. If they were willing to defeat end-to-end encryption, they would first implement the desktop client in a more functional and easy way.

But it is possible for the client itself to build a map of advertising id -> interests and send that over to Meta separately. This would be similar to one of Chrome's proposals.


The old joke is that if you want to increase the quality of the ads you see, google rolex and Rolls-Royce for a week


Another idea: test a null hypothesis. Block all the Facebook traffic with your Pi-hole or a proxy so the messages won't go through. Then reproduce the exact same behavior with your wife (it would be better if you both could get a clean identity and then repeat the exact same conversations and topics, but getting a clean identity is not that easy), so all the external factors are the same except the Facebook connection itself. It could be even more granular: try to block just enough so it won't send the messages while still allowing Facebook ad trackers, etc.

If you try this several times, the messages are not going through, and the corresponding ads still show up, you can be sure that it is not because they are reading your messages. That does not rule out the possibility that they might read them, but at least you can be 100% sure that your ads are not showing up because of your messages in this case.

If they do show up, you could then try ruling out other factors like client-side keyword analysis, e.g. by talking about very generic things which are not useful to ad trackers, like "how you going?", etc. (i.e. awkward elevator conversations), but it is harder to test a null hypothesis for client-side keyword analysis.


Just pointing out: Pi-hole can't block web traffic, only the DNS lookups. If server IP addresses are hard-coded, there's nothing Pi-hole can do.


Yeah, I recommended Pi-hole because it is easier to set up than a proxy, and it is very easy to see if it works: just try to send messages. The last time I tried it, you could block WhatsApp traffic by just blocking the domain at DNS level.

Of course, if it does not work, use a proxy.
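A quick way to sanity-check that the DNS-level block is actually in effect from a given machine (the domains listed are just the obvious candidates; check your Pi-hole query log for the real set):

    # Check whether a DNS-level block is in effect: blocked names should be
    # sinkholed (0.0.0.0 / 127.0.0.1) or fail to resolve via the Pi-hole.
    import socket

    DOMAINS = ["whatsapp.net", "whatsapp.com", "graph.facebook.com"]

    for name in DOMAINS:
        try:
            ip = socket.gethostbyname(name)
            status = "blocked (sinkholed)" if ip in ("0.0.0.0", "127.0.0.1") else ip
        except socket.gaierror:
            status = "blocked (no answer)"
        print(f"{name}: {status}")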


I'd ask myself another set of questions:

- Can they extract information from conversations and exfiltrate it in a stealthy or obfuscated enough way that they won't be noticed, or will have plausible deniability?

- Do they have incentives to do so (assuming the absence of liability described above)?

- Do they have a track record on related topics that makes you confident they wouldn't act that way?

My answers being yes - yes - no, the question of 'do they listen to target the ads they try to make me display' is pretty irrelevant to me. I can't trust them not to, nor reliably check whether they do.

If you try to address a different question such as 'do they really encrypt reliably to protect your conversations from being snooped on without their authorization', the threat analysis may differ. In that case they have incentives aligned with yours and are probably faithfully trying to effectively protect your/their data.

At the end, I'd estimate the probability of the scenario and how I value the consequent loss of privacy. Then accept/mitigate/refuse the risk accordingly.


All these corporation apologists... why wouldn't Meta/Fecesbook do this? You can read about corporate overreach on a daily basis - and that's just what we find out about. Here are some of today's privacy headlines on The Register, right now:

Significant customer data exposed in attack on Australian telco - Subscribers have questions – like 'When were you going to tell us?'

Boeing to pay SEC $200m to settle charges it misled investors over 737 MAX safety - Ex-CEO also on the hook for $1m after skipping over known software issues

Privacy watchdog steps up fight against Europol's hoarding of personal data - If you could stop storing records on people unconnected to any crimes, that would be great

Meta accused of breaking the law by secretly tracking iPhone users - Ad goliath reckons complaint is meritless – but it would, wouldn't it?

Federal agencies buying Americans' internet data challenged by US senators - Maybe we don't want to go with the netflow, man

.. and I'm only halfway through!

Did you think the rule of law is there to protect you? Do you think corporations won't break the law to get access to information? Have you learnt nothing?!


I've told you again and again that if you value privacy you don't use WhatsApp, you use Signal.

Personally I use Telegram which works fine for me and I have taken a fair amount of flak for saying it is a better choice than WhatsApp.

I'll still try again: There is more to security than protocols and algorithms. If you value your privacy, don't use a free messenger from a company with a long record of sleazy behaviour.


Well, even end-to-end encryption in WhatsApp doesn't eliminate the metadata that is being collected about you. Even then, unless you are using a keyboard that specifically does not gather information, the keyboard you type your message with might be sending keywords/metadata onward. Combine that with the rest of the information being collected about you, and whatever you speak about isn't so secret anymore.

It's very easy to blame an application, but the problem with the modern ecosystem is that it's all very interconnected. Signal makes a point of having a setting that sends a request to the keyboard to disable personalized learning, but even that is a request. There isn't a guarantee that it complies.

Companies that deal with data will not use a single source of information, but a huge variety of sources and your smartphone is like a huge vacuum that is pulling in everything it can gather from you through any means possible.

Lastly, it could also be observation bias as others have mentioned, but to truly be able to regain control, you would need to take a variety of steps to make this change.


try an external source of true randomness for choosing your test topics. choices that seem random to you may be totally predictable.

i know that's wild, but also often true. humans are bad at randomness. there may be no direct leak at all of your test topics, they might just be guessable based on everything that is known about you, people like you and things you've been presented or looked at.


You also have to factor in confirmation bias-type effects where when you are looking for something everything seems related. If you are seeing dozens of ads a day on Instagram and suddenly you have some "random" topic in your head you will mentally connect them.

Maybe this could be counteracted by something like:

1. Generate multiple random topics and only send one across WhatsApp. Count "related" ads for each.

2. For every other random topic, don't send it across WhatsApp and see if you still find "related" ads.


that would work. i would be utterly shocked if an experiment like that still found related ads. (you'd probably also want a prior on the baseline prevalence of various topics.)

assuming that's all true, good news is that they're not doing any totally illegal spying. bad news is, they don't have to.

> If you are seeing dozens of ads a day on Instagram and suddenly you have some "random" topic in your head you will mentally connect them.

ahh yes. incidentally that sort of behavior is often linked with the onset of poor mental health. there are some interesting questions around the limits of personalization and impacts on mental health. if the machine behaves like magic, specifically on a personal level, does that encourage magical thinking?


Pcaps or it didn't happen.

People have been claiming this for years, and yet we have never seen actual evidence. I completely understand being creeped out by the surveillance shops, and I've seen coincidences that weirded me out.

But if this is going on, then there is network traffic about it. And busting FB with real proof of audio surveillance would be a massive feather in some researcher's cap.

I don't buy it.


I keep being told this is a conspiracy and that several people checked on this and found no evidence of them spying on you, but I still have a hard time believing they're not doing something really sketchy.

The case that was the final straw for me was when I was chatting with my partner and remembered a funny song from my childhood, so I opened YouTube on Safari and showed it to her. A couple of minutes later, she opens Instagram (on her own phone) and the first "follow suggestion" is the artist from the song. She had never heard of this song before, much less of the artist (which is not famous at all).

I would understand if everything happened on my phone/accounts, but the suggestion was on her phone and account. I don't think they're literally listening to you, but there's definitely a GPS-based user relationship table somewhere which reflects what you do to everyone they think has some connection to you and is physically close to you.


The Reply All podcast has a nice episode on this, #109, about whether your phone is listening to your conversations.

They conclude that ads are shown to people you know: you search for a product and then your partner gets ads for that product too, since the ad networks know through other tracking methods that you spend a lot of time together.


> definitely a GPS-based user relationship table somewhere which reflects what you do to everyone they think has some connection to you

I've seen this when I am sharing IP addresses with someone because we're both connected to the same WiFi. Especially if you block Meta from being able to get any data about you but the other person doesn't. Meta then gets data from that other person because it can't see anything from you.

It helps when I'm looking for a gift for my partner - I can see what she's been looking at recently because it's most of my adverts on Facebook.


Pretty sure this can be explained without GPS, if you were both using the same internet connection (e.g. home WiFi NAT'ed through the same IP from your ISP).


You are both on the same wifi.


Just because they're on the same wifi doesn't mean FB gets to break the SSL connection between OP's (separate) client and YT to snoop on details of the video being played, and then suggest actions on their competing IG platform, on a separate device.


As suggested by others, I would also recommend going one step further and turn this into a proper experiment, e.g.:

- systematically record how often this happens, in contrast to other ads, and for each of the random topics

- record for each random topic whether you have also mentioned it anywhere else, e.g. in a Google search or some other digital medium

- make the choice of random topics more random, i.e. not depending on current moods (which might be biased through subtle, external nudges)

-...

These are of course just pointers, and by no means a proper experimental setup.

I'm aware that this might take the fun out of your playful approach. However, you might be surprised by the results, in whatever direction. Also, it would give you a much more grounded foundation for further discussion. Of course you can just keep doing it the current, less tedious way. I'm only suggesting it because you seem to be interested in the topic and it might be more satisfying for you to turn this into a little citizen science project.


When they say that only "you" have access to read the encrypted message (at the end) could they be being disingenuous with that interpretation?

Take this scenario for example:

1. E2E is not broken in any way by Meta/WhatsApp. In this scenario only the two WhatsApp clients (and thus you and the other person) have access to the messages. This is required for you to even read the messages in the first place.

2. The WhatsApp local instance is running on YOUR device under YOUR username / digital identity. From a legal perspective, is it possible that, since the app is running under your username, it is also considered "you"?

3. If number 2 is true then it might give the local WhatsApp instance legal shield to read and do anything it wishes (locally) with the message content. And then of course this could be sent separately back to Meta/Whatsapp in a very small format easily mixed in with other traffic.


WhatsApp embeds google analytics into your client, which means that it does tag all your messages while you are viewing them (and also tags them with each open/close of the conversation).

If you don't know this already, use App Warden to remove spyware handlers on Android and use RethinkDNS to block their ad domains.


What does "tag" imply? What exact data is supposedly taken by Google?


First of all: I don't think you're crazy. I also don't think WhatsApp is the one leaking your data.

My counter argument: I use WhatsApp all the time and nothing I talk about on it ever shows up in my ads. A hefty amount of adblock may help here, as does the fact that I live in the EU, where the worst tracking is illegal.

Something on your phone is probably leaking data. Most suspect are third party keyboards, accessibility apps, apps with access to your photos and videos, or even Google Assistant. Third party keyboards can easily track what you're typing, accessibility apps can parse what you're saying or typing, and Google Assistant will take a screenshot of your current screen when you invoke it.

Other options are clipboard scanning (i.e. on older operating systems) and perhaps link preview services breaking out of e2e.

Finding what app is selling your information is difficult. For starters, you don't know which device is leaking. Ad companies are smart enough to see the connection between you and your wife. Her search results alone can probably make ads appear on your device!

Also consider the Baader-Meinhof phenomenon. You can only track special topics if you track the topics of all ads and apply some statistical analysis. If you get blasted with ads all day, you'll notice the ones that you're on the lookout for. Pausing your scrolling through the app to take a screenshot will then reinforce the e-stalkers' algorithms.

If you have two old phones lying around, try repeating this trick with phones that are completely wiped, without any Google account logged in, with firewalls to block anything but WhatsApp from talking to the internet. I bet you'll find that those devices won't generate ads.

Why do I think that? For starters, enthusiasts decompile and analyse WhatsApp APK files all the time, in search of rumours and beta features to report about on tech news sites. If at some point WhatsApp added a secondary information channel about your messages (whose encryption is reasonably proven), reporters would've made a HUGE story out of it. A single line of decompiled code can send tech outlets into a frenzy of Meta accusations and set the EU's regulators loose on them for lying to customers. It'd be the scoop of the year!

Personally, I think "Google's keyboard or Instagram's gallery scanner is leaking my data" is a lot more likely than "WhatsApp has never been analysed enough to find the magic leaking code".


Are you sending each other links or just mentioning the topics in text?

If this is just text (and I'm definitely not defending Meta here), could it also be that the ad networks have us so figured out already? The topic you choose to talk about may be influenced or seeded by your environment (online/offline), and one thing leads to the other almost deterministically.

Here's an experiment: try rolling a die a few times or using a random number generator to pick one word or more from a list like the EFF wordlists [0], and then talk about that exclusively.

[0]: https://www.eff.org/deeplinks/2016/07/new-wordlists-random-p...


I see a lot of tinfoil theories in the comments but no one mentioning "URL unfurl" capabilities. You send a link, it's encrypted end to end, it arrives on the other side, and the link is unfurled for display. Bam, Facebook knows you sent that link.


Every time I buy a new car, there are suddenly a lot of the same model everywhere I look.


From first-hand knowledge, some encrypted/secure apps have client-side feature detection. Leaking features doesn't count as violating a standard. They may be running something like a named entity recognizer or keyword/category recognizer that is "sufficiently" anonymized. This happens on photos too; of course it can be adjusted for device parameters, battery availability, and geolocation. I cannot speak to WhatsApp specifically, but I have noted this directly in at least one other popular messaging app. I would absolutely assume no encryption exists. Rats in the opera house.


It could be the keyboard on your phone that's essentially keylogging everything you type and sending it to advertisers. I have no idea whether this may have really happened, but it's certainly feasible.


My understanding was always that WhatsApp controls/has access to the key, so they can decrypt anything anyway. It wouldn't be surprising if "end to end" means that it includes you, your partner, and WhatsApp.

It wouldn't be surprising if WhatsApp gleans info from your comms and builds a profile of you, from which ads get injected. WhatsApp is not selling your actual comms, but the likelihood you'd be interested in certain things/products. Sort of like how the three-letter agencies supposedly only store the metadata of your calls, not the actual call.


It's very unlikely. Not because FB is a kewl company with super kewl morals.

If you look at it through the lens of game theory, the employees are extremely incentivised socially, ethically and financially to leak it.

First, they didn't break E2E, because it's very hard to do that without people knowing.

So now we are talking about a "soft break" where they search/send keywords before e2e kicks in. They wouldn't be able to do that without quite a few employees knowing.

Let alone those super nerds who spend insane amounts of time reverse engineering these apps and spoofing network requests just to see wassup.


Completely agree; there's a ton of incentive for any security researcher to find E2EE leaks in the world's leading messaging app. In addition to that, Meta would be in a ton of trouble from a legal point of view if they did.


"End to end encrypted" on proprietary code base from META of all companies, with no ability to publicly audit?

I am to this day baffled by the gullibility of people believing that WhatsApp is E2E encrypted.


Recently I was booking a trip to Finland, so my booking app showed me suggested destinations in other nearby countries like Estonia. That's normal.

Then, my friend asked me where I would most want to go if I were to go scuba diving. I answered "the Philippines". My friend then said "the Maldives are also great". We never searched for anything, just casual conversation. A few minutes later I looked at my booking app, and guess what the top suggestions were: the Maldives, followed by the Philippines. Must be a coincidence.


If the confidentiality of your messaging is a concern, you shouldn't be using WhatsApp, or most closed-source software, anyway. There's mostly no point in speculating, because it will be hard to verify the extent of information leakage from the vendor.

It would be better to use Signal or Element, something that tries to solve the key exchange problem. And if you are even more concerned, run their respective server software on your own hardware. Then you can inspect what goes in and out.


Please note that it could also be the keyboard app sharing stuff.


I'm pretty sure even the mics are listening to what we say, since for years it's been happening that when I discuss something with my girlfriend, ads about it start to show up. We don't even use WhatsApp, didn't search for the said company on Google or Facebook, nothing. We just discuss something about it with our phones at our side and suddenly ads start showing up.

The fact they are spying on WhatsApp messages isn't really surprising.


There have been a couple of studies on the topic that observed network traffic and were never able to find anything. So despite the persistence of the rumors, so far this remains a conspiracy theory.


Speech-to-text isn't difficult to do and is within the capabilities of Meta. I even wrote a basic proof of concept of what it might look like: https://github.com/smuzani/soundrecorder

It's not too hard to transcribe speech into a few KB of text, or to summarize it further on the client, and then upload that together with the rest of the data.
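
A quick back-of-the-envelope check on the size claim, assuming roughly 150 spoken words per minute and about 6 bytes per English word; both numbers are rough assumptions, not measurements:

    # Rough size estimate for a transcribed call (assumed rates, not measurements).
    WORDS_PER_MIN = 150      # typical conversational speech rate
    BYTES_PER_WORD = 6       # average English word plus a space, in ASCII

    def transcript_size_kb(minutes: float) -> float:
        return minutes * WORDS_PER_MIN * BYTES_PER_WORD / 1024

    for mins in (1, 10, 30):
        print(f"{mins:>2} min call ~ {transcript_size_kb(mins):.1f} KB of text")
    # A 10-minute call comes out to roughly 9 KB, easy to bury in ordinary telemetry.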

I'd be very interested in seeing those studies and whether they match up.


I don't think end-to-end encryption means what most people in this thread think it means. Yes, it's encrypted between devices. However, when I open WhatsApp on a new device, it pulls all my message history for the last 5 years. I wouldn't be able to do that without the keys. So to facilitate that, they must be storing the keys and sending them over on their end.

And if they have the keys then they can still read your messages!


Sure, it's end-to-end encrypted, but it means nothing when Signal, Meta, Apple, Google etc. are conveniently keeping a copy of the encryption key.


They probably have the key, but decrypting takes server time, and why bother? They are the ones who encrypt and decrypt it. Just scan the plain text, which they must have anyway in order to encrypt the message.


Let me share a plausible scenario, which may be a bit on the conspiracy-theory side :) Yes, WhatsApp uses a secure protocol which prevents anyone, even WhatsApp's servers, from reading your messages. However, the WhatsApp mobile app, developed by Facebook, can see all your unencrypted text while it is typed or shown on the screen. So the ad targeting may be happening there.


Messages sent are encrypted, but what about the keyboards? I know Samsung logs all keystrokes for ad purposes. Also, what about WhatsApp backups? Are they encrypted? What if another app is reading off of them? Or screen-time apps, most of which are selling your data and need permission to read everything on your screen in order to block apps and content.


Ok, so you have the data now. You seem to be a person who cares about privacy, so why do you still have a WhatsApp account?


Meta owns the code. You type messages into a little text box, and Meta (the WhatsApp app) takes that plain text and encrypts it. There's no way they do not have access to each and every message you type for long enough to collect data on you.

And, given it's Meta, there's no way they are not doing this.


That doesn't mean that they are collecting data from the messages themselves; if you can get proof of that, you'll make a ton of money! WhatsApp doesn't operate in a regulatory vacuum, and they would be in big trouble if they broke their E2EE/privacy promise.


I would love to see a control channel in this experiment here.

Pick two topics every time. Send one topic to your wife on Whatsapp. Write paper messages to your wife about the other topic and give it to her.

Track how often you see advertisements for the topics in each channel. A significant difference between the channels would be worth sharing.
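
If you want to score the results rather than eyeball them, something like Fisher's exact test works on counts this small. A sketch assuming scipy is installed; the tallies below are placeholders for your own observations:

    # Compare ad hit rates between the WhatsApp channel and the paper "control"
    # channel. The counts below are placeholders for your own tallies.
    from scipy.stats import fisher_exact

    whatsapp_topics = {"ads_seen": 9, "no_ads": 1}   # 10 topics sent via WhatsApp
    paper_topics    = {"ads_seen": 2, "no_ads": 8}   # 10 topics shared on paper

    table = [
        [whatsapp_topics["ads_seen"], whatsapp_topics["no_ads"]],
        [paper_topics["ads_seen"],    paper_topics["no_ads"]],
    ]
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
    # A small p-value would point at the WhatsApp channel specifically, rather
    # than at ad targeting that already had you figured out from shared context.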


Prediction: they will still see ads for the topics shared in writing because the surveillance being done is based on recording the inner speech of everyone with microwave imaging and machine learning.

Encryption is useless against advanced covert radars. Big Tech knows and benefits by lying by omission.


I had a video call with my mum on Signal, from her Android phone to my MacBook. Then I got a very specific video recommendation on YouTube about our conversation within the same day. Her Android phone is an Oppo. Could it be leaking the Signal call and then cross-matching me via phone number to my Google account?


Yes, it’s possible. If you have an android device, it’s possible your device (or a ‘bad app’) is sending signals too.

An easy way to test is by talking about something (but not researching it) that’ll put you “in market” for advertising targeting.

You must say things to show you’re in-market. Eg. “I really want to buy a new AWD pick up truck” — include some brand names and specifics, which will give the spy device more confidence about the ads to show you, something like “Toyota Tundras seem better than the Ford F150, but maybe I should get the Hummer EV truck”.

Try variations of this “conversation” a few times over 2 days to help give the ML model confidence. Then, monitor ads on social media, pre-roll ads and new ephemeral recommendation tiles on YT that suddenly feature videos on this topic.


More likely something on your end was listening, if you’re the one who got the ad.


I realize that real life is more complicated than this, but if you tell companies "privacy is important to people" and "you will make more money by reading their messages", it is safe to assume companies will assure you that your data is private while using it to make money.


Most users activate WhatsApp's suggested "backup" feature, which uploads all chats, and those are most likely not E2E encrypted. And of course Meta could just directly read and upload stuff from the app; they probably buried a vague clause for that somewhere in their agreement.


Those backups are sent to Google and Apple, not to Meta.


They are E2E encrypted now in recent versions.


No, you have the option, disabled by default, to enable backup encryption. And there are two ways to enable it: one with a short password, in which case Meta can always decrypt the backup themselves, and another with a 64-digit key, which AFAIK is unaudited.


"E2EE" -> "And during encryption we extract/share non-identifying personal information that we sell to ad networks (keywords or vectors from your conversations). But don't worry, there's no way to identify you based on this data."


I've seen this with Telegram on iPhone. A few weeks ago I had a heated argument with a female friend on Telegram, and immediately Instagram (Meta) started showing me breakup memes and Rumi quotes. There have been other recent examples. I don't know what to make of it.



How do you know you have picked the topic at random? Maybe you've subconsciously seen adverts for that thing, and that is why it is salient to you. Once you write it down, it becomes conscious and you start noticing the ads.


In my case, I observed this happening because of the keyboard I was using (Gboard). Generally they keep track of all the words we type (for dictionary or suggestion purposes?), and that is used for ad targeting.


Both E2E encryption of messages and data processing for ads can happen at the same time. The app can do it on your phone and just send keywords to a server. In fact, this is the absolute best way for apps like these to do it.


I've always wondered how the business decision is made between charging a dollar per year with no information sharing versus free with some information sharing. Does anyone know of a messaging app that uses such a model?


My wife and I play a similar game .. except we don't actually exchange messages, we just talk in private about something. Next we'll try just thinking about it, and see if that works too!


IMO keyboard apps, ISPs, and clipboard fiends are to blame for the most part.


You and your wife can both install Signal and play the same game. Then you can rule out Facebook snooping on your messages. And you can think of bigger conspiracies, which is always fun.


It's probably not WhatsApp. It's maybe your keyboard. Gboard (the default on many Android phones), for example, sends words back to Google by default if you do not turn it off.


Call me paranoid, but what about stuff that you just say when around your phone? This has happened to me a few times (ads showing up about relevant obscure topics), so I'm wondering.


I think that's paranoid. Looking at the state of speech recognition, they have a hard time even when actively listening to a clean source. Beyond that, they have more productive, lower-hanging fruit than this [0].

[0] https://en.wikipedia.org/wiki/Targeted_advertising


Just uninstall the apps.


Uninstall Google too while you're at it. Click.


I was under the impression that Google keyboard uploads everything you type to the cloud for 'product improvement' purposes. Why do you think WhatsApp is to blame?


Use antitrust to break up the tech conglomerates into individual companies and so avoid collusion of this sort?

There might still be collusion but it'd likely be far more transparent.


WhatsApp is end to end encrypted, but it can still detect what you are typing and can use that info to target you. It just can’t detect anything once you click send.


The message might be indeed end-to-end encrypted but the local WhatsApp installation could extract keywords as you type from the message and send them to Meta.


The android keyboard is also capable of reading everything you type, especially if you install a third party one.


Meta can still run on-device code to extract meaning from messages and pass on the metadata to the Ads backend.


> Is WhatsApp scanning personal messages to target their ads as we are noticing?

Three letters...begins with Y


I literally mentioned Vitamix in a single WhatsApp call and started getting Vitamix ads on IG. Note that I've never searched for a blender, and I don't really want one; this is the only occurrence of Vitamix in... years of my conversations.

So, I'm guessing that they not only read your messages, but also run speech-to-text on your calls and serve relevant ads.


How are you coordinating the topics? Maybe by voice within earshot of a phone with Meta apps?


The easiest explanation would be your keyboard leaking data. What keyboard do you and your wife use?


Make something up.

Send messages extolling the utility of brrlftz. Discuss how everybody not taking advantage of brrlftz will miss out. Let your SO know that you need as much brrlftz as can be produced and delivered.

Keep your eye out for cheap imitations offered to you.


If it is all encrypted, how come the police always have access to WhatsApp conversations?


By accessing the device. Not only is it E2E encrypted, WhatsApp's servers are only a middleman: as soon as messages are delivered (double check mark), they are deleted from the server. Undelivered messages are retained for up to 30 days, according to their privacy policy:

<<Your Messages.

We do not retain your messages in the ordinary course of providing our Services to you. Instead, your messages are stored on your device and not typically stored on our servers. Once your messages are delivered, they are deleted from our servers. The following scenarios describe circumstances where we may store your messages in the course of delivering them:

Undelivered Messages. If a message cannot be delivered immediately (for example, if the recipient is offline), we keep it in encrypted form on our servers for up to 30 days as we try to deliver it. If a message is still undelivered after 30 days, we delete it.

Media Forwarding. When a user forwards media within a message, we store that media temporarily in encrypted form on our servers to aid in more efficient delivery of additional forwards.

We offer end-to-end encryption for our Services. End-to-end encryption means that your messages are encrypted to protect against us and third parties from reading them. Learn more about end-to-end encryption and how businesses communicate with you on WhatsApp.>>

https://www.whatsapp.com/legal/privacy-policy/?lang=en

Assuming they are compliant with their privacy policy, they don’t even have the messages.


> End-to-end encryption means that your messages are encrypted to protect against us

So, they say the protection is there once the encryption has been applied. They say nothing about what happens to the content before or after that on the end users' devices. That handling is, however, covered by other legitimate-use clauses in the privacy statement, and that could cover keyword scanning for targeted ads (or so a defence lawyer will argue at some point).


Yeah, that could be tested by reverse engineering the app and analyzing your network traffic to see if there are requests leaking keywords.
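
As a sketch of what that traffic analysis could look like, here is a minimal mitmproxy addon that flags outgoing requests containing planted test words. Note that WhatsApp pins its TLS certificates, so actually intercepting its traffic generally requires a rooted device or a patched APK; the word list below is made up:

    # keyword_sniffer.py -- run with:  mitmproxy -s keyword_sniffer.py
    # Flags outgoing requests whose URL or body contains a planted test word.
    import logging
    from mitmproxy import http

    TEST_WORDS = {"brrlftz", "vitamix", "maldives"}   # your planted topics

    def request(flow: http.HTTPFlow) -> None:
        body = (flow.request.get_text(strict=False) or "").lower()
        url = flow.request.pretty_url.lower()
        hits = {w for w in TEST_WORDS if w in body or w in url}
        if hits:
            logging.warning("possible keyword leak to %s: %s",
                            flow.request.pretty_host, sorted(hits))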


It's hard to encrypt "all". "End to end" encrypted means that a message is encrypted while in transit. "At rest" encryption means that the plain text message is not saved to storage, only the ciphertext. Software, however, can behave in a way where both of these are true, yet the plain text is still readable by a third party. For example, the app can send the plain text after you decrypt it. Or the encryption could have multiple keys that can decrypt: you can have a private key for yourself, and the authorities could have a master private key that decrypts everything. Or your phone could have a middleman between the touchscreen and the application, so your "keystrokes" can be stored and sent somewhere, for example by your keyboard application.
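
A toy model of that last point, using a shared symmetric key from the third-party cryptography package (real E2EE uses the Signal protocol with key exchange and ratcheting, not a shared Fernet key); the point is only that both endpoint apps necessarily handle the plain text:

    # "End to end" protects the wire, not the endpoints.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # in reality negotiated via key exchange
    alice_app = Fernet(key)
    bob_app = Fernet(key)

    plaintext = "let's buy a brrlftz"
    wire_bytes = alice_app.encrypt(plaintext.encode())   # all the server sees
    received = bob_app.decrypt(wire_bytes).decode()      # Bob's app sees plain text

    # Both apps hold the plain text before encryption and after decryption;
    # that is where keyword scanning, keyboards, or screen readers could sit.
    assert received == plaintext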


Your silly game is what watchdogs should be playing.

Sadly, they just never seem to be up to the task.


A lot of users back up their WhatsApp messages to Google Drive unencrypted.


Yeah but tough luck trying to prove it.


Fuel the monopoly, suffer the monopoly.


Sounds like confirmation bias?


Which keyboard are you using?


They are using Gboard, so it's pretty clearly nothing to do with WhatsApp, just Google doing Google things.


Fascinating discussion.


What was the topic?


Have you considered the possibility that Facebook is listening to you via the chips Mark Zuckerberg added to the COVID vaccine that he forced you to take?

Just do your own research, man...


There is an app called TransferChain: https://transferchain.io They say everything is encrypted and no one can read, modify, or sell your data because it is heavily encrypted and the metadata is stored on a blockchain. It is a blockchain project and seems very legit. Does anyone have any comments about it? Could you give me some feedback please? My Meta apps all read and analyze my data :( Thanks!



