So, as always, metaphors are a lousy way to reason and quickly break down: once you are arguing that X is like Y, you will hit the limits of where X is unlike Y any time you try to deal with something complicated about X.
Hence regulation of a medium where everything lasts forever, can be looked up instantaneously from almost anywhere, and can be copied and repeated all over the world by anyone who wants to, should not follow the model of regulation for a medium where everything is ephemeral and, while theoretically open to anyone, in practice not so open, because availability of space limits proximity and proximity is needed to take part.
If you were voice chatting over Facebook, they would probably have to make you aware if they were listening to or processing the data for words. By "make you aware," I mean bury it in the ToS on page 784 in a generic and ambiguous way, which is sufficient in court should someone litigate.
Now when you start looking at voice assistants like Alexa, GA, etc. the waters could get muddier.
Until proven otherwise, I think it is absolutely fair to say WhatsApp messages are, indeed, encrypted end-to-end with no possibility for FB to read the contents.
So, to answer your question: encrypted by users, and the users have the keys. FB doesn't. WhatsApp is definitely good for most things private. More so than e.g. the Facebook wall or Messenger itself, as per TFA.
Sneaky fine print is one thing. Plain lying is another.
Facebook's history is a long string of pretty significant scandals that are forgotten a year later because they're not about X, which would be something much more serious and deserving of great reprieve...
> No matter how disillusioned you are with Facebook, directly and overtly lying to customers about encryption would be next level
This feels like an ethical line drawn on your own, and a technical distinction (which we, as technically-inclined people, are wont to make) that means little in the practical legal and sociopolitical frameworks of what constitutes a breach of contract and cause for punishment.
> Sneaky fine print is one thing. Plain lying is another.
Have you read the entirety of FB's fine print regarding its services, including WhatsApp? I haven't.
Instead of us going back and forth about this, perhaps a lawyer can weigh in: is there any way Facebook would survive a lawsuit if this were patently false? Meaning, they deliberately put in a backdoor to the encryption and are reading messages, knowingly, as intimated in this thread.
And would this be "another day in the life", or would it be a transgression of new levels for Facebook?
It is my conviction that companies try to do what they can to stay within the confines of a hypothetical lawsuit. That's what legal departments are for, essentially. If this were a lie, I would be very, very interested in knowing how they got that document past legal. But perhaps a real lawyer can elucidate matters?
Of course, this made Facebook people extremely angry, and they had lots of arguing about it. In the end the man left (losing a lot of money in the process), and of course they can read your messages now.
They had specifically been working on that for something like a year or so. A team inside Facebook was created just for that.
Messages are not encrypted by users; they are encrypted by a closed-source application that Facebook controls 100%. They can just modify the software and force an update. It is not magic.
If you can write the source code you can do anything you want.
While that's true on one hand, it's also misleading on the other. When WhatsApp was acquired in 2014, it wasn't e2e encrypted. Only by 2016 was it utilizing full e2e encryption.
So while the founders did care about privacy, for the first 8 years of the product's life e2e encryption wasn't a thing. It's much easier to add a very complex feature like e2e encryption once you have an infinite amount of money coming from Facebook.
>In the end the man left (losing a lot of money in the process)
Not earning an extra few hundred million dollars, after you've earned many billions, is not something it's easy to get sympathy for. The amount of money they left on the table is mostly a rounding error for their bank account.
They didn't even have TLS in the early versions...
That's the theory. In practice, you see exactly the same thing.
It's hard for me to think of something I would want to say that's too sensitive to send over SMS but not too sensitive to send over an encrypted chat maintained by Facebook.
Let's say you have the source code. You have the source code audited. You have reproducible builds, so you know that the source code is what was used to generate that set of binaries.
How do you know that the platform vendor isn't substituting some other code at runtime? How do you know that the hardware doesn't have a back door? How do you know that the compiler isn't inserting malicious code into the app?
At the end of the day, the only way to prove that something is doing the right thing is to watch what it does. Everything else is an educated guess.
People have built third-party WhatsApp clients, even as Facebook has tried to combat them. What WhatsApp does has been observed closely enough that people have been able to interact with the server at various points in time. There have been vulnerabilities reported in WhatsApp; no software is perfect, and WhatsApp has made the choice to collect more metadata than, for instance, Signal.
But up until now, nobody has found any evidence that WhatsApp is not end-to-end encrypted as it claims to be. And there are ways for people to find that out, and a lot of incentives for people to do so. Nothing is perfect, and at some point you have to decide to trust someone. I understand that trusting Facebook is fraught. But there are people with good reputations who do good work, like Moxie and tptacek, who do have expertise and can recommend WhatsApp (with an important list of caveats). WhatsApp is not perfect, but most of the options out there are much, much worse.
user1 -- server -- user2
Does this mean user1 to user2, or user1 to server and server to user2? Both of them are "end to end," depending on whether you define the ends as the clients (user1, user2) or the server. Again, it has been shown several times that real user-to-user encryption is very rare; I am only aware of SILC doing that. I do not know enough about WhatsApp to believe it is user-to-user, and I do not trust Facebook with anything. You can prove me wrong, though.
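To make the distinction concrete: "user-to-user" end-to-end encryption means the clients agree on a key between themselves, and the server only relays public values and ciphertext. WhatsApp's actual protocol (the Signal protocol) is far more involved, but the core idea can be sketched with a toy Diffie-Hellman exchange. The parameters below are deliberately tiny and insecure; this is an illustration of the concept, not a real implementation.

```python
import os
import hashlib

# Toy Diffie-Hellman parameters (NOT secure -- illustration only).
P = 2**127 - 1  # a Mersenne prime, far too small for real use
G = 3

def keypair():
    """Generate a (private, public) pair; the private key never leaves the device."""
    priv = int.from_bytes(os.urandom(16), "big") % (P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = keypair()  # user1's keys
b_priv, b_pub = keypair()  # user2's keys

# The server only ever relays a_pub and b_pub (and later, ciphertext).
# It never sees a_priv or b_priv, so it cannot derive the shared secret.
shared_1 = pow(b_pub, a_priv, P)  # user1's view of the shared secret
shared_2 = pow(a_pub, b_priv, P)  # user2's view of the shared secret
assert shared_1 == shared_2       # both ends derive the same key

# Both ends hash the shared secret down to a symmetric message key.
key = hashlib.sha256(shared_1.to_bytes(16, "big")).digest()
```

If the "ends" were client-to-server instead (as with plain TLS), the server would hold a key for each hop and could read every message in between; that is the distinction the diagram above is asking about.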
This is almost the opposite of how “expectation of privacy” is defined in the legal sense and used in court. Where did you get your idea from? Are you thinking of that term in a security context rather than legal?
For one, you’re defining the term with respect to the spies and not the public, and for two the idea is not whether someone could spy on you given the chance, the whole idea is that when you do something in private, you can reasonably expect it to stay private and that expectation should potentially be protected by law.
“Expectation of privacy must be reasonable, in the sense that society in general would recognize it as such”
Whereas, neither the private detective, nor the police, would be breaking the law by snooping on you in the public square. So either/both would feel empowered to do so, because society does not think such an action is worthy of punishment.
I've honestly never understood how the expectation of privacy arose from using Facebook. You are literally sending your information to Facebook.
If anyone is at fault here, it's browser makers who enable sending your data to Facebook. Facebook isn't reaching into your computer "stealing" your data. It's recording what is being sent. Obviously talking about the web site here.
Well put. Unfortunately, in this case, our legal and often social reasoning works exactly this way.
I think you've put your finger on the metaphor at the heart of this problem: the public square. I've had this thought several times when hearing/reading Jack Dorsey's response to all sorts of issues. We can't have a private owner and rulemaker for our public squares, without losing some of that metaphor's more meaningful implications.
I think there are two big issues at play. One is that public square issue. If that is appropriate to think of FB that way, I think the implication is exactly opposite of what FB execs want: you can't exclusively exploit it for private gain. A (metaphorical) public square is a political (literal) forum. The types of problems that come with a private public forum are exactly those demonstrated by the Cambridge Analytica scandal. The public forum gets auctioned off secretly to political bidders.
The second issue is another metaphor (possibly) broken by a shift in the underlying reality. "Expectation of privacy" means something totally different online. There's no expectation of privacy when you get in your car and drive to the store. But a new technology that happens to give a company the ability to log your car's whereabouts at all times changes what that lack of expectation actually costs you.
Not just that. Also selling the ability to remote control boom mics and remote control telephoto cameras, so they can send you an advertisement based on what you were looking at.
If you stand in the section of the public square next to Facebook and they listen in, then it's no different than standing by any other website. It's true FB is popular and present in a large portion of the square, but they're not entirely unavoidable in the public square.
It's also true that others can talk about you and share your information in the public square and Facebook will pick it up, but now we're dealing with control over your own information in the public square, which is a different topic altogether.
Most stalking laws apply to a single person who is targeting an individual with the intent to harass, incite fear, or some other malevolent purpose.
At least that's the German interpretation, afaik in the US, it's interpreted a bit differently where everything "in the public" has no reasonable expectation of being "private"?
Here is another one: social media is like a lively party where people can mingle in large and not-so-large rooms, with different degrees of interaction with the totality of the guests. One can see that this is not a metaphor that FB would choose to discuss.
You mean like a mall? There's no expectation of privacy at the mall.
They directly create an expectation of privacy with their controls. The argument that they are making is outrageous.
It is not a metaphor. It is an analogy. It was used as a comparison to the living room.
Knowing that, it is hard to follow your point.
Here's the quote I found; it seems to be referring to a particular tort. You may also want to look at the full text to see the citations I omitted in the [...] for brevity:
Intrusion upon seclusion. This tort requires intentional intrusion “upon the solitude or seclusion of another or his private affairs or concerns … if the intrusion would be highly offensive to a reasonable person.” [...] Plaintiffs have not alleged any intrusion into their private affairs; rather, the information at issue is all data they already shared with a broad circle of friends and even strangers (friends of friends). [...] Nor could the disclosure of information such as page likes, which are designed to be communicated to other people, be “highly offensive to a reasonable person.” Disclosure of far more private information, such as private medical records and the identity of undercover police, has been found insufficient. The “highly offensive standard … is reserved for truly exceptional cases of intrusion,” [...] and this is not such a case.
The important thing is not whether there's some pre-existing "expectation of privacy" in a domain (e.g. public square, or social media).
The important thing is whether (1) society wants (or deems beneficial) privacy in that domain and (2) privacy is achievable in that domain.
If (1) and (2) hold, there should be no talk of "expectation" and other such BS, and this can be settled with e.g. a referendum or a number of other ways.
The problem I see with your suggestion is then society must pro-actively make some sort of decision for each and every domain in advance, which isn’t possible. The goal of expectation of privacy is as a test to determine when privacy would be or has been violated on a case by case basis. In that sense it’s already somewhat doing what you’re talking about; it is a way to determine whether society deems privacy beneficial in a given domain.
Whether privacy is achievable does not make a good criterion at all, because it's always possible to compromise privacy. The entire point of the "expectation of privacy" test is to determine whether someone who did violate another's privacy should be allowed to; it's a precursor to determining the right to privacy.
First, why not? It's very much possible, we have all kinds of systems in place, even electronic voting.
Second, privacy is a serious matter, of which there aren't tons, so it's not like "each and every domain" is equally important.
Third, it's not like a decision for all domains has to be taken simultaneously and right now. We could vote for one domain per year, and we'd have covered 10 most important domains in a decade...
>The goal of expectation of privacy is as a test to determine when privacy would be or has been violated on a case by case basis.
It's also used to mean "you wouldn't expect it in X domain, so you don't get to have it there".
And, as lots of domains where privacy is important are new, what's "expected" can go either way, like FB trying to argue that "people don't expect privacy in social media".
If we instead change it to whether people want privacy in social media, that's more clear cut, regardless to whatever someone can argue we "expect" or "not".
In fact, even if we keep "expect" as the deciding factor, why shouldn't we vote on whether we do expect it or not? Why should "expect" be a matter of speculation?
Ask us (the people) to vote directly whether we expect it or not, don't decide for us, with some BS medieval-style theological syllogisms...
We can and do already vote on it, and laws establishing the right to privacy online already exist and more are in the process of being established, since historically speaking online domains have only just barely been invented.
For example, you already have no right to privacy from being photographed while outside in public in the US. You already do have an established right to privacy in your bedroom. Neither of those cases is subject to expectation of privacy tests, because the right to privacy has already been determined.
Personally it seems quite useful for expectation of privacy to always exist as a way to help sort out domains that have not yet been covered by law, and it would be silly to think that we understand all the future permutations and implications of online privacy mixed with big data well enough to cover it legally today.
Facebook’s defense here is using “expectation of privacy” in a subversive and sneaky way. The problem is Facebook has both kinds of data, public and private. It’s wrong to suggest that people don’t have an expectation of privacy for their hidden profile data just because they’re using social media. It’s probably right to suggest that anything I post publicly can’t be considered private, even if people want that. But that’s complicated when you data-mine all public data at once and draw potentially private implications using AI. It looks like their defense is about to be tested in court, so hopefully the court will see through Facebook’s defense. One of the outcomes of Facebook using “expectation of privacy” as their defense is that whatever happens here in court starts to become the law by precedent.
Facebook has been running huge ads all over media and streets with "your privacy choices" and "we protect your privacy" to salvage some of their reputation.
If they do this, and then at the same time claim there is no privacy on their platform, any judge who does not throw this out would be really dumb.
But even though people use many parts of Facebook less often, it's really hard to quit Facebook entirely because a lot of social activity revolves around it. In my circle, people tend to plan events using Facebook and often use Messenger to communicate. In university, people would ask questions in various FB groups. So while no one's really happy with Facebook, it still provides several conveniences that we don't want to live without.
Also, Instagram is still very popular - much of the activity that used to be on Facebook seems to have moved there.
There's not the level of interaction (and exposing one's own opinion) that you'd see below a Facebook or Twitter post.
Apart from that, when meeting people and continuing to meet them, I usually use the good old phone to communicate with them. The people I interact the most with on Facebook are at the same time mostly those I meet the least in the real world.
Shaving off an additional layer of complexity...
I also fear what FB and Zuck are capable of if they realise they're on a dead-end business model, and what abuses of personal data we might see as they attempt to cash out before everything dries up...
Society and regulators should treat the social media industry like we do with tobacco and firearms.
Zuck: Just ask.
Zuck: I have over 4,000 emails, pictures, addresses, SNS
[Redacted Friend's Name]: What? How'd you manage that one?
Zuck: People just submitted it.
Zuck: I don't know why.
Zuck: They "trust me"
Zuck: Dumb fucks.
... and therefore one should seriously consider not using Facebook.
If you've read the article, the context here is the cambridge analytica scandal, not someone posting on their public timeline.
Data you put on social media is basically public. It may not be easily discoverable by the public at large, but that lack of discoverability is not a perpetual guarantee at all.
But most participants on social media don't understand that. Maybe it's because they overestimate the confidentiality of social circles. Even offline that's not much of a protection.
So you actually are conceding the point. You agree that "most participants" actually do have an expectation of privacy.
In my opinion it is unreasonable to expect friends not to share private information about you. And I see a lot of evidence, both in my life and from observation of other people, that people consistently underestimate the probability of friends breaking that trust.
That's an individual's own risk, and it's not the same question as whether Facebook should treat the data as private, and not sell it, expose it, etc.
Sure, I should be cautious of what I say in social media, because e.g. the other side could leak it, the servers could be hacked and the contents exposed, etc.
The same holds for my snail mail or what I type in my laptop. But I still expect those to be kept private (and even more importantly, want them to be kept so, and want the law to enforce that).
Just because there's no "perpetual guarantee" doesn't mean there shouldn't be company-closing fines for selling your data, or for negligence in keeping it private (e.g. unencrypted passwords come to mind), etc.
In the case of privacy-endangering malpractice like unencrypted passwords, that is already happening through GDPR for example. It is also, however, affected by the contract the individual has with the company.
I don't think that social media interactions, especially on Facebook, fall under that umbrella, in general. Private chats, maybe. But even there you know that friends may share the texts or get hacked, and the Facebook system monitors such communication for certain things like child pornography, possibly resulting in human operators reading your communication.
Much of what facebook does, and what we actually want it to do for us, is not possible with a reasonable expectation of privacy. If I put a photo on facebook, I know that the whole world has access.
For example, I never understood how people would want to send intimate pictures to someone else (even in the era of paper photos or video phones) and expect them not to share it.
People are uploading things to the internet for the sole purpose of having as many people as possible see them. That's the entire goal of the exercise.
It’s baffling that those same people would then turn around and behave as though they are surprised that people can see those things.
Is it OK for Amazon to sell or use them? How about using them to blackmail me? Would that be OK?
So if instead of uploading that photo to S3 and marking it private, you uploaded it to Facebook and clicked the "Show This Photo To Everybody In The World" button, it would be OK for Facebook to show it to everybody in the world. It would be impossible to blackmail you with it, given that there is nobody left without the ability to see it, at your request.
I personally have lots of photos up on Facebook that anybody in the world can look at if they choose. I also have lots of photos that nobody but me and my family can look at unless I show them personally. I accomplished this by not uploading photos from that second group to Facebook.
What I'm parsing out of your comments is that no matter what Facebook says, we should expect it to act without regard to privacy?
"But if [something other than what I said], then [something different would happen]."
Yes. I agree with your conclusions about the things I didn't say. But I guess I'll have to ask that you stop posting them as though they were responses to (or related in some way to) things I did say.
On social media, we share (and overshare), and yet we are shocked when our privacy is exposed. We should not expect it.
If we talk quietly to each other in the privacy of our homes, sharing that information is a breach of social trust.
If we talk on the bus around others, we shouldn't be too surprised if someone hears us.
If we share what we say to all, we should absolutely not be shocked. Especially when we are explicitly sharing on purpose.