Someone on Slashdot asked the key question: If an individual did this, they would be prosecuted for hacking under one law or another. Why is Facebook not facing criminal charges?
No they wouldn't. Facebook had the user's consent to read that traffic.
Plenty of people liked using a VPN that paid them to use it. It was great. They knew they were giving up their privacy to a big corporation, but chose to do so because they were getting paid.
Today's big corporations slurp up your data and don't pay you.
Whatever their contract with the phone's user says, in many states they also need the consent of the other party to the conversation, who may not have any agreement with Meta at all, since they may not be a Facebook user. If they don't have the consent of both parties in 100% of the cases where it's required, and there is even a single conversation they wiretapped illegally, that would be a crime.
Snapchat isn't a phone or in-person voice conversation. State two-party recording consent rules don't apply.
If you work for a large employer in the United States, they are probably decrypting all of your TLS traffic to every site on the internet from your (their) work devices without the consent of the other party. This is a part of any good security architecture. There is no other way to ensure it isn't actually malware command and control traffic, for example.
You consent to this by using the employer-provided device and only your consent is needed. Same deal here. I am sure Snapchat doesn't like it (I think some of Meta's internal messaging around it said as much) but I can't think of how it would be illegal in the U.S., though I can't speak for every jurisdiction in the world. Not a lawyer, though.
> “I can’t think of a good argument for why this is okay. No security person is ever comfortable with this, no matter what consent we get from the general public. The general public just doesn’t know how this stuff works,” Canahuati wrote in an email, included in the court documents.
> They knew that they were giving up their privacy to a big corporation
Sure they did. I’m sure they didn’t just click “I agree” or tick the boxes to “Accept the Privacy Policy” and “Accept Terms of Service” without really reading and understanding all the legalese. I mean, who would do that, especially as they’re getting paid? We all know the average user has an academic understanding of how their data is collected and its consequences. And it was Facebook doing it, so it’s a given there were zero dark patterns or trickery involved. This is absolutely the user’s fault, all hail corporations doing whatever shit they want.
> Facebook had the user's consent to read that traffic.
Are you assuming that Facebook only read traffic between two mutually-consenting parties? I assume Snapchat did not consent to this. What if you're in a state that requires two-party consent?
> Are you assuming that Facebook only read traffic between two mutually-consenting parties? I assume Snapchat did not consent to this. What if you're in a state that requires two-party consent?
I think that just applies to phone calls or conversations, not all internet traffic, unless you think that using wireshark or the chrome developer tools to save network traffic / http requests is a crime?
Clearly, they did not. California is an all-party consent state. There is no reason to build a VPN snooping system if you have the consent of all parties; Snap Inc could've just straight up given them usage information and/or direct access to monitor behavioral data and read conversations in the app. That would have been much cheaper and simpler if everyone had consented. The entire point of building this system is that Snap Inc, at a minimum, did NOT consent. Additionally, I find it very likely that the other participants in the monitored conversations did not consent either. The user who installed the app may have consented, but that is not good enough in California: ALL parties must consent.
California is the one state where this argument might have a chance, because both Meta and Snapchat are there, and maybe some of the users Meta talked into this are too. However, I wouldn't put the chances anywhere near 100%.
Also, another article I read says the CIPA carries a civil penalty of $5,000 per violation, so if you're Meta, who cares. Not exactly the federal Computer Fraud and Abuse Act.
Honest question, as I want to learn something. Zuck's quotes were:
> “Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted we have no analytics about them”
> “Given how quickly they’re growing, it seems important to figure out a new way to get reliable analytics about them. Perhaps we need to do panels or write custom software. You should figure out how to do this.”
I'm against what FB did, and it is still Zuck's responsibility, but let's take a fake company as an example. Say I'm your manager, and I ask you to start learning more about what our competition is doing. If you used publicly available info (like news articles or other media, even their engineering blog), OK great -- but if you decided to snoop their traffic, which only went between that competitor's servers and their users, I wouldn't be OK with that. Perhaps you did something unethical, but also I'd be responsible bc I hired you.
How do you prevent this sort of thing, especially at a huge company? Especially a huge tech company, where I believe lots of tech workers tend to specialize in tech and often lack soft skills/ability to think about the bigger non-technical picture. For example, "oh shit, we can't actually read this data w/o the customer's consent."
You answered your own question. If you are a manager and you asked someone to get this information, you have enough oversight to know whether the information was obtained through ethical means or not.
At FB's scale, if Zuck wanted this information, he had enough staff to flag unethical snooping on users. They did it because they wanted to, not because they didn't realize it was unethical.
If you are the direct manager, you should ask plenty of questions. If you are the IC, be prepared to answer plenty of questions.
If you are a skip-manager, you need to build a culture where being ethically responsible is the right thing to do. This isn't easy to do because you quickly run out of time to talk to all of your employees and figure out what everyone is doing.
> How do you prevent this sort of thing, especially at a huge company?
You don't have to when you're a huge American company. No one is risking prison time, and the fines are a drop in the bucket considering FB's profitability.
Exactly. Here's a great example of them criticizing NYU for researching political influence & advertising on the Meta platform:
"Research Cannot Be the Justification for Compromising People’s Privacy"
https://about.fb.com/news/2021/08/research-cannot-be-the-jus...
I apologize for it, as I believe it indirectly goes against the HN guidelines. Even more so because sarcasm as a form of humour is not necessarily understood as such in every culture, which can lead to misunderstandings. (From experience, it doesn't go down well in Germany, for instance.)
You're assuming the engineers that do this care on a fundamental level, but that there must exist other factors which coerce them into doing it anyway.
That's a comforting thought. It rationalizes the harmful behaviour. But it's also arrogant in a way, because you're projecting your own moral values and assuming that other individuals must share those too.
The simpler explanation, but also one that's much more difficult to understand, is that some number of humans simply don't care about, and may not even be capable of understanding and internalizing in any real way, the consequences that their actions have for other people.
And sometimes, a system built around apathy as a value weeds out all the ones that do care, or are even able to care. The question in those cases isn't "Why not speak up? What are you afraid of?"; it's "Why say anything? How does that profit me personally?".
I believe Zuckerberg when he says he doesn't know why [1] people trust him. I think it, and most of the harm that Facebook does, comes from a genuine place of not understanding. Amoral actions don't require the presence of some willful evil, nor of some coercive force; they just require the absence of sufficient empathy and self-awareness.
> You're assuming the engineers that do this care on a fundamental level, but that there must exist other factors which coerce them into doing it anyway.
FAANG salaries. The prestige of FAANG work.
Work devolves into L1 and L2 requirements that can, with a bit of effort, obfuscate what a feature does. "Hey Jim, did you knock out those 8 L2s? We gotta ship end of the month" -- and it just becomes another feature add.
Reminds me of the Chinese hacking company leak a few weeks ago.
Hack-a-Nation-State-as-a-Service and Hack-a-Service-as-a-Nation-State. The most predictable and boring dystopia. Like was anyone surprised Facebook was doing this?
I don't understand how FB can decrypt other apps' network traffic. TLS would fail if they tried a man-in-the-middle attack, even with their VPN software installed on the device.
> Facebook's IAAP Program used nation-state-level hacking technology developed by the company's Onavo team, in which Facebook paid contractors (including teens) to designate Facebook a trusted "root" Certificate Authority on their mobile devices, then generated fake digital certificates to redirect secure Snapchat analytics traffic (and later, analytics from YouTube and Amazon) from Snapchat's servers to Onavo's; decrypted these analytics and used them for competitive gain, including to inform Facebook's product strategy; reencrypted them; and sent them up to Snapchat's servers as though it came straight from Snapchat's app, with Facebook's Social Advertising competitor none the wiser.
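The trust-store part of that quote is the key to the question above. Here's a toy model of why adding a root CA defeats TLS's MITM protection; all CA names are hypothetical, and real chain validation of course also checks signatures, expiry, and hostnames, not just the issuer:

```python
# Toy model of certificate-chain acceptance: a leaf certificate is accepted
# iff its issuing CA is in the device's trust store. This only illustrates
# the trust-store angle; real validation verifies cryptographic signatures,
# validity periods, and hostnames. CA names below are made up.
def chain_accepted(leaf_issuer: str, trust_store: set) -> bool:
    return leaf_issuer in trust_store

trust_store = {"DigiCert Global Root"}                 # stock device
print(chain_accepted("Onavo MITM Root", trust_store))  # False: forged cert rejected

trust_store.add("Onavo MITM Root")                     # what the paid participants did
print(chain_accepted("Onavo MITM Root", trust_store))  # True: forged cert now validates
```

Once the Onavo root is trusted, the proxy can mint a fresh certificate for any hostname on the fly, terminate the TLS session itself, read the plaintext, and open a second TLS session to the real server, so neither endpoint sees a handshake failure.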
How is that "hacking"? The test subjects willingly modified their CA store; nothing can protect against that. It's users choosing who should receive the data sent by their device (though certificate pinning could help).
Reminds me of TV users willingly plugging a box into their TV that sends usage data to a statistics institute.
In the same way a patient might willingly consent to some highly specific chemical treatment being replaced with another highly specific chemical - they have no idea, they rely on the doctor not to purposefully mislead them. You could put anything in those forms and most people would trust the professional is at least not allowed to defraud them.
> The test subjects willingly modified their CA store, …
I don’t think users knew what they were consenting to. Only a small percentage of the population know what a Certificate Authority is. And also “monitoring traffic” doesn’t carry the weight of “we are going to listen to your private conversations”.
> “I can’t think of a good argument for why this is okay. No security person is ever comfortable with this, no matter what consent we get from the general public. The general public just doesn’t know how this stuff works,” Canahuati wrote in an email, included in the court documents.
Certificate pinning would defeat this. Bake it into your app that you only trust a specific certificate regardless of what is in the system trust store.
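A minimal sketch of that check in Python. The function name, constant, and hash choice are my own; in a real app the pinned fingerprint is compiled in at build time, and pinning the public key rather than the whole certificate is more common so the cert can be rotated:

```python
import hashlib

# Sketch of certificate pinning: instead of only walking the system trust
# store, the app hard-codes the SHA-256 fingerprint of the expected server
# certificate and refuses any connection presenting a different one --
# including a MITM cert signed by a user-installed root CA.
# The "DER bytes" stand-ins below are placeholders, not real certificates.
PINNED_SHA256 = hashlib.sha256(b"genuine-server-cert-der").hexdigest()  # baked into the app

def pin_matches(cert_der: bytes, expected_hex: str = PINNED_SHA256) -> bool:
    """Return True only if the presented certificate matches the pin."""
    return hashlib.sha256(cert_der).hexdigest() == expected_hex

# In practice the DER bytes come from the live handshake, e.g.
# ssl.SSLSocket.getpeercert(binary_form=True); a mismatch should abort.
print(pin_matches(b"genuine-server-cert-der"))  # True: genuine certificate
print(pin_matches(b"forged-mitm-cert-der"))     # False: pin mismatch, abort
```

This is why the MITM trick works against the system trust store but not against a pinned app: the forged certificate chains to a trusted root, yet its fingerprint can never match the pin.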
The TC article leaves that a little unclear: were they actually looking at the plaintext or just gathering metrics about snapchat usage? The latter wouldn't require decrypting the session.
If Onavo did install a certificate and MITM the connections and send private user data to Meta... that's beyond the pale. That's far more worthy of a cover story than Bloomberg's debunked secretive tiny chips story from a few years ago. It's equally as bad if not worse.
Seems pretty clear that they could decrypt the traffic they were interested in, they also talk about 5 years of retention of all traffic that they can decrypt at anytime. Sound familiar?
> In 2016, Facebook launched a secret project designed to intercept and decrypt the network traffic between people using Snapchat’s app and its servers.
I read the rest of the article as well, and saw only confirmation:
> Given that Snapchat encrypted the traffic between the app and its servers, this network analysis technique was not going to be effective. This is why Facebook engineers proposed using Onavo, which when activated had the advantage of reading all of the device’s network traffic before it got encrypted and sent over the internet.
Where do you see the ambiguity? Other than the weasel words about proposing these programs (versus actually running them), it seems clear that they were decrypting the traffic (or reading it before it was encrypted). Did I miss a piece?
> This is why Facebook engineers proposed using Onavo, which when activated had the advantage of reading all of the device’s network traffic before it got encrypted and sent over the internet.
This doesn't make sense; they wouldn't see the traffic before it was encrypted. They would see it encrypted, but with the MITM certificate instead of Snapchat's. Given the inaccuracies in the article, it makes me wonder what else they got wrong.
Using a VPN client to monitor how much, when, and where traffic is going is bad, but MITM'ing a user's connection is much, much worse. I'm really skeptical that's what happened, especially given TC's inability to articulate accurately what Facebook did.
> Onavo [...] would collect the “Time you spend using apps, mobile and Wi-Fi data you use per app, the websites you visit, and your country, device and network type.”
That's the former type of collection I was talking about. There's no evidence I can find that they installed a root CA certificate and were MITM'ing connections. That's a major accusation and one that is not accurate as far as I can tell.
The secret was revealed in 2017 [1]. I guess there are new documents, though?
Facebook knew about Snap's struggles months before the public (Engadget)
> To be clear, Facebook isn't grabbing this data behind anyone's back. The company says Onavo Protect is explicit about what info it's collecting and how it's used, and that apps have incorporated market research services like this "for years." The odds are slim that many people read these disclosures before using Protect, but anyone who was concerned could have found them. The revelation here is more about how Facebook uses that information rather than the collection itself.
"The researchers [at NYU's Ad Observatory] gathered data by creating a browser extension that was programmed to evade our detection systems and scrape data such as usernames, ads, links to user profiles and “Why am I seeing this ad?” information, some of which is not publicly-viewable on Facebook. The extension also collected data about Facebook users who did not install it or consent to the collection. The researchers had previously archived this information in a now offline, publicly-available database. "
The immorality of Facebook has no limits. Censorship? Check. Espionage? Check. Profiling? Check. Propaganda? Double check. Dictators of the world are envious of Mark ;)