Hacker News

> Perhaps the ads came from some other similar search queries. Perhaps they came from the keyboard intercepting what was typed. Or perhaps something else that I can't think of. But I'm nearly certain it did not come from meta intercepting the contents of your messages.

Isn't this kind of splitting hairs? Does it matter if text information came from a "side channel"?

It seems like the promise Facebook makes is that 'your communication using WhatsApp is secure'; that's certainly my interpretation of what "end to end encrypted" means. It is a promise of security. That means text is sacred, and even text sent to Giphy should be privileged from the ad machine.

The question being asked here is not "is it end to end encrypted?" It's "are my communications secure?" End to end encryption is just one element of that security.
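The distinction between "end to end encrypted" and "secure" can be made concrete with a toy sketch (Python, purely illustrative; the XOR "cipher" and the in-process "keyboard" are stand-ins, not real cryptography or a real input method). The point is that a keylogging keyboard sits before the encryption step, so it sees the plaintext no matter how strong the cipher on the wire is:

```python
# Toy sketch: end-to-end encryption protects the transport,
# not the endpoints. NOT real crypto; for illustration only.

captured_by_keyboard = []

def keyboard_input(text: str) -> str:
    """Stand-in for a 3rd-party keyboard: passes text through,
    but keeps a copy for itself (here, a list; in the wild, a server)."""
    captured_by_keyboard.append(text)
    return text

def e2e_encrypt(plaintext: str, key: int) -> bytes:
    """Stand-in cipher (single-byte XOR) representing the e2e transport.
    Real apps use e.g. the Signal protocol; XOR is only for illustration."""
    return bytes(b ^ key for b in plaintext.encode())

message = keyboard_input("meet at the pharmacy at 5")
ciphertext = e2e_encrypt(message, key=0x42)

# The wire only ever carries ciphertext...
assert message.encode() not in ciphertext
# ...but the keyboard already has the plaintext, pre-encryption.
assert captured_by_keyboard == ["meet at the pharmacy at 5"]
```

However unbreakable `e2e_encrypt` is, `captured_by_keyboard` is unaffected, which is why "is it e2e encrypted?" and "are my communications secure?" are different questions.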




The thing is, if it's a 3rd-party Android keyboard or similar that logs your messages, then there's nothing Meta can do about this.


That's still Facebook's problem; no excuses. Facebook absolutely has the power and resources to lobby Google and Congress. Security teams at both companies would unequivocally agree that keylogging presents an extremely grave security risk, one whose consequences consumers are unlikely to understand and therefore need to be protected from.

Imagine a hapless military professional/politician downloading one.

The problem is one of alignment. Facebook wants to monetize WhatsApp and wants the WhatsApp data. That's why there was a mass exodus to Signal in the first place: Facebook was weakening the protections of the app.

Due to the alignment problem, Facebook can't advertise WhatsApp as the secure and private choice, because they are actively working to make it less secure and private. That's why Brian Acton quit (leaving $$$$$$$ behind) in the first place.


I don’t agree that it is Facebook’s problem, but I do think this is probably where a lot of data gets leaked that people don’t realize or think about.

In a perfect world, sure, Facebook has the power and money to do a lot of things. So do the other megacorps. They don’t do them, and you’re correct that the misalignment of incentives is why.

But Facebook doesn’t control what keyboard you use on your phone and if the keyboard is sending every message you type somewhere, they can’t do anything about that and they aren’t lying that they can’t read your messages.

Whether or not you believe that they do in fact harvest the message data is up to you. But keyboards that harvest data certainly seem plausible to me as a vector for this stuff.


In the other post in this thread, I link to a website that ostensibly has a method of warning about non-standard keyboards. If "e2e" communication is part of a product's marketing, do you think they have a responsibility to warn when that expectation might be violated? What about warning that text sent to Giphy may be used for advertising purposes?

If I were to summarize my entire thoughts on WhatsApp, it's that it advertises security (e2e) while only making money from violations of that security. The behavior OP expects is exactly the behavior a person would expect from this set of misaligned incentives.

If a leak is able to be monetized (even if it is google harvesting keyboard data and selling it back to FB) do you think that would be punished or rewarded?

If this very same post were about Signal, I think the response we might expect would be concern and investigation, not defense and deflection.

There was an article several weeks ago about how a "special master" tasked with understanding what data Facebook collects on you was stonewalled, because "even Facebook doesn't know what data Facebook collects."

https://news.ycombinator.com/item?id=32750059

In other words, "we don't want to be accountable for any data except the data that's part of the 'download your data' export":

> Facebook contended that any data not included in this set was outside the scope of the lawsuit, ignoring the vast quantities of information the company generates through inferences, outside partnerships, and other nonpublic analysis of our habits — parts of the social media site’s inner workings that are obscure to consumers. Briefly, what we think of as “Facebook” is in fact a composite of specialized programs that work together when we upload videos, share photos, or get targeted with advertising. The social network wanted to keep data storage in those nonconsumer parts of Facebook out of court.

> Facebook’s stonewalling has been revealing on its own, providing variations on the same theme: It has amassed so much data on so many billions of people and organized it so confusingly that full transparency is impossible on a technical level.

> The remarks in the hearing echo those found in an internal document leaked to Motherboard earlier this year detailing how the internal engineering dysfunction at Meta, which owns Facebook and Instagram, makes compliance with data privacy laws an impossibility.

Facebook doesn't even want to know whether WhatsApp is leaking data.


That 3rd-party keyboard would also be able to log your Signal messages, so I don't get your point.


If the original post is true and Facebook is leaking message-based data into systems that produce ads (3rd-party or 1st-party), they have a responsibility to diagnose and resolve the issue. Despite that responsibility, they are not aligned with doing so.

Excuses like "the user did something bad" aren't productive.

A warning that the user's expectations (secure communications) do not match reality (3rd-party-keylogged communications) seems like the minimum level of responsibility:

https://maheshikapiumi.medium.com/allowance-of-third-party-k...

If WhatsApp-derived information is showing up in advertisements, it is Facebook's responsibility. It is in Facebook's best interests (next quarter's profits) not to be responsible.



