I believe a reasonable person would expect the limits of the access grant to end with the purpose of the app (a quiz) and not extend any further.
Most contracts/agreements have limits, and this one is implicit in the working of the app. It is in no way clear that the app is earning its money by selling/manipulating the data you're giving access to.
Additionally, it's disingenuous to ignore that a subversive app will invent or tack on features that in no way benefit the end user, existing for the sole purpose of scraping messages.
Intent is important; for some of these apps it is clear the only intent was to harvest data en masse.
Yes, the user agreed to grant some permissions so a quiz could be taken. It cannot and should not be ignored that the app developer purposefully did not disclose the true nature of the app. This wasn't an accident. They didn't trip and fall into the data. They made something appear benign when it was really cancerous.
Perhaps the personality quiz example that seems to be the cornerstone of this whole CA story is different: I'm not sure what your expectations would be when allowing such an app access to your messages. However, there are quite a few examples of strange behaviour of software running on phones where it's pretty clear that isn't what you had in mind when you opted in. An example that comes to mind is Facebook apps activating your microphone to log conversations you have with people in person (I'm not sure there is any proof of this one yet, but would anyone be surprised?): when you opt in to microphone use, it's so that you can make calls in the app, not to invite spying. "But you opted in!" Yeah, okay. "Sorry!" - Zuckerberg
I feel like private messages are something that should never be granted to third-party apps on Facebook. People just click allow because they assume whatever is listed is being used for the purpose of the application.
You had to authorize the messages permission separately from all other permissions. If you didn't want your messages accessed, you simply declined that permission. This one's on the user, not on anybody else.
The Obama For America app stole data from about 4x the number of profiles that this app did - about 200 million people, less than 1 million of whom gave them any form of consent, much less informed consent, as was given here with the messages permission. Do you have an issue with that app’s title?
Everybody (and Facebook best of all) knows that users will click [Accept] or [I've read the 27 page T&Cs] or [I hereby give Cambridge Analytica my firstborn child] without even slowing down to consider any consequences - in return for another tiny dopamine fix.
Users are only being asked for permission to access their "messages", are given no further information, and cannot proceed if they do not submit to the request.
If you're a good-hearted person, you assume that an app needs to use your messages for one specific purpose, much like some apps ask for permission to read your text messages just so they can automatically validate incoming SMS code requests.
So permission to access your messages could mean "we just need to do one thing", or it could mean "we're going to siphon up all of your private data".
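The benign case is genuinely narrow. An app that only wants to validate an SMS code needs nothing more than a pattern match over a single incoming message - a minimal sketch (the pattern and function names are hypothetical, not from any real app):

```python
import re

# A 4-8 digit one-time code, the kind of thing a verification SMS contains.
# Illustrative only: real apps vary in how they match codes.
CODE_PATTERN = re.compile(r"\b(\d{4,8})\b")

def extract_verification_code(sms_body):
    """Return the first 4-8 digit code found in the message, or None."""
    match = CODE_PATTERN.search(sms_body)
    return match.group(1) if match else None
```

An app doing only this never needs to store or transmit the message body - but the permission grant can't express that distinction, which is exactly the problem.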
The generically worded option preys on the user being trusting enough to give the other party the benefit of the doubt. That is not how the system should work, and preventing specifically this sort of thing is the entire focus of the GDPR being passed in Europe right now.
Waving 'user consent' around as a reason to do nothing is the wrong approach.
Everyone should have an expectation of privacy: when they send a private message to someone, that someone does not have the right to disclose the conversation.
We're getting a little ridiculous here. There are valid complaints against Facebook in all of this - this just doesn't happen to be one of them.
Facebook makes a lot of noise about privacy (using their own definition, naturally, being 'preventing anyone outside Facebook from seeing your data') so it's reasonable for a nontechnical user to not expect that an acquaintance installing a "what's your pirate name arrr" app should lead to their 'share with friends only' info being acquired and sold by a data broker.
The difference is that email forwarding is part of the RFCs and IETF standards that define email, e.g.:
Whereas with Facebook, while one or more parties might be bound by some agreement with the company, that doesn't create any clear agreement between users.
If you have a problem with usage implying consent to a TOS then you can make that argument, but this one is flimsy.
They see a "forward" button in their email client. Just like Facebook Messenger has a "forward" button on messages.
No one is expecting a different level of privacy in Facebook Messenger over email because of the IETF.
Edited to add: Granting an app access to your contacts, for instance, is controversial from a legal perspective in Europe. Data protection agencies, at least in Germany, hold the opinion that you need opt-in consent from everybody in the address book to do that.
No. I would impose a ban on selling our conversations for money or for services.
Edit: to clarify, I believe users on Facebook were bartering with data that wasn't theirs for access to Facebook's services.
How does this worldview reconcile with the fact that many different states have two-party consent laws? Or the fact that considerable money and effort have been spent on concepts like OTR?
People expect others to have some discretion in what private information they pass on to others. Most applications have this assumption built in.
You can easily forward a single email to someone in Gmail, but there is no one-click solution to send all of your emails to someone else.
That’s actually wrong. Right now, today, you can write apps for the Gmail platform that have this ability as soon as someone authorizes it. So everyone is down on Facebook, who disabled this ability 4 years ago. What about Google?
This isn't your nana forwarding chain mail.
So perhaps it's not an expectation of privacy, but more of an expectation that your private messages won't be distributed by third parties for profit.
The world doesn't just give you privacy. If you want any, you have to work at least a little bit for it.
> "The court here has taken hold, embraced the challenge of the technology and caused the law to catch up with the technology, in line with Canadians' values," Lonsdale said. "Canadians can expect that their private, one-on-one communications with each other remain ones that they have a reasonable expectation of privacy."
I think the new CONSENT Act, by one of the authors of the child-privacy law COPPA, looks quite promising to make these things much clearer for users.
I watched the hearing and Zuckerberg kept evading questions about users "controlling data" responding with nonsense such as "Of course users control the data - they decide what to post on Facebook!"
Completely omitting the issue about what happens to their data once third-party developers have access to it, even in the context of sharing something with friends (and not "the public"), or in this case private messages.
And I agree with others here. Private messages should never even have been part of a permission. I think now I understand why Facebook recently said that they see Messenger chat as "public". I thought it was just a metaphor, but they seem to mean it quite literally. Your private messages on FB are no different from your public profile data to both Facebook and third-party developers.
Oh and by the way, Facebook also recently admitted that most of its 2 billion users' profile data, including email and phone numbers, have been scraped off the website.
I'm not sure if we have enough details to say if this was informed consent or not.
But I doubt it was informed consent if the messages were gathered by the Facebook app. Back when I had that app installed, Android did not support granular permissions. One day I got an update notification saying a new update would gain the permission to read text messages. I believe the rationale was so the app could automatically parse a text-message confirmation code. Since I'm savvy, I never installed that update, or any update after it. Now it turns out Facebook used that permission to slurp up people's text messages into its profile of them.
This wasn't informed consent: I wasn't honestly told what they would use the permission for, the request wasn't clear (just the standard Android notice), and it was an all-or-nothing ask.
For example, reddit’s API allows applications to access messages, which is useful for third party clients.
If someone at a gas station asks me "hey could you give me $5 so I can buy some gas. I'm out of cash." and I give them the money. Then they proceed to fill up a can and go burn down a house. Should I be arrested as an accomplice?
There was no indication that they would do that, and if I had known, there's no chance I would have agreed to give them anything.
You'd have to find additional evidence that I consented to that action. In this case, I really doubt anyone knew that their data would be used in this way when all they wanted to do is take a quiz.
It is disheartening to see people label it clickbait; this is a very important message that needs to sink in with the public.
> A research company (GSR) licensed the data to us, which they legally obtained via a tool provided by Facebook. Hundreds of data firms have utilized Facebook data in a similar fashion.
Considering we were a tiny defense contractor who didn't get a ton of contracts, I have to assume this happened in some way or form.
"Because of an editing error, an earlier version of a headline with this article misidentified the entity that improperly harvested the personal information of up to 87 million Facebook users. The information was collected through a quiz app developed by the researcher Aleksandr Kogan, not by the consulting firm Cambridge Analytica."
"It is not clear whether the direct messages were among the data eventually provided to Cambridge Analytica. In an interview on Tuesday, Mr. Kogan told The Times that the private messages were harvested from a limited number of people, likely “a couple thousand,” as part of a separate academic research project and never provided to Cambridge Analytica."
"Aleksandr Kogan, a Moldovan-born researcher from Cambridge University, admits harvesting the personal details of 30 million Facebook users via a personality app he developed.
He then passed the data to Cambridge Analytica who assured him this was legal, he said. "
The part I don't understand about all of this is that, yes, somewhere along the line someone made the decision that giving developers access to that data was ok... but they REMOVED access to it as soon as they realized what sharing that data meant. That's how startups are trained to behave: shoot first, ask questions later.
They made a mistake and corrected it on their own; what more is there to ask? I guess it's more of an issue because of the nature of their business?
Did that unleak the data? No. It was a mistake, but it required a cleanup which wasn't achieved and wasn't made public.
Ed Felten coined the phrase "Exxon Valdez of privacy" over a decade ago in anticipation of this: https://freedom-to-tinker.com/2006/06/12/exxon-valdez-privac...
"The company's growth spurt has spawned a host of daunting questions that no data-retrieval system can easily answer. Should Google play ball with repressive foreign governments? Refuse to link users to "hate" sites? Punish marketers who artificially inflate site rankings? Fight the Church of Scientology's attempts to silence critics? And what to do about the cache, Google's archive of previously indexed pages? "
Whenever a colossal disaster happens, there is nearly always someone who was telling anyone who would listen what the problem was years in advance.
Felten is not as famous as Schneier but just as important, and has been doing lots of important work such as fighting against electronic voting systems.
(1) Make an earnest attempt to use ML algorithms to identify their customers who are using those leaked datasets Facebook negligently exposed, and help devalue the data, instead of eagerly selling them targeted advertising services?
I don't know if they did this, but it sure seems doubtful.
(2) Quickly and openly disclose the extent of the leaked data
(3) Stop using manipulative and deliberately opaque TOS to enable ever more data collection
I'm not being sarcastic or insincere; this is my honest opinion of what they could have done. I am continually surprised at how many people making $$$ in ad-tech/PII data mining and brokering seem naive to the fact that this type of behavior would inevitably result in exponential growth of user outrage.
How could they do that? The cat is out of the bag and FB aren't going to have any knowledge about where that data is now. Have there been reports of it getting out from CA?
> (2) Quickly and openly disclose the extent of the leaked data
I think some caution is a good idea, they don't want to get the numbers wrong - although they are making steps in the right direction with the message to 87 million on their news feeds.
>I think some caution is a good idea, they don't want to get the numbers wrong - although they are making steps in the right direction with the message to 87 million on their news feeds.
Totally agree with the second part, but ~4 years (only divulging the info when forced to, during PR damage-control mode) is well past being cautious. It's being cautious with the amount of damage the disclosure does to your profits; Equifax doesn't even wait that long.
The real number is north of 2 billion.
"Malicious actors have also abused these features to scrape public profile information by submitting phone numbers or email addresses they already have through search and account recovery."
"Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way."
I think the outrageous part is that app developers could get data of their users' friends - those are the people that never used the app or agreed to its permissions.
Either it's campaign contributions, or perhaps, without understanding the technology, most are afraid of what Mark could learn about them by running queries on their personal messages.
I've never seen them so nice, including asking whether he wanted a break, at which the crowd laughed; they treated him like he was 8 years old. Incredible! No wonder he shook his head, smiling, at more questions!
If it weren't for recent changes suspending authorisations after a period of inactivity, these tokens could seemingly be worth something to the right person.
The root problem being, average users don't know what they're giving access to, or why it's important to be critical of such access.
"Version 1.0 of the Graph API launched on April 21, 2010. It was deprecated in April 2014 and closed completely to legacy apps (ie, existing apps that used the API before April 2014) on April 30, 2015."
Other third party applications had that ability as well.
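For context, the capability the deprecation closed was gated behind a user-approved permission; the request a legacy app could make once a user granted it looked roughly like the following. The endpoint and permission names here are reconstructed from memory (the since-removed read_mailbox permission and the /me/inbox edge), so treat this as a sketch, not Facebook's documented API:

```python
from urllib.parse import urlencode

# Sketch of the kind of Graph API v1.0 request a legacy app could issue
# after a user granted the since-removed read_mailbox permission.
# Endpoint and parameter names are illustrative, not authoritative.
GRAPH_BASE = "https://graph.facebook.com/v1.0"

def inbox_request_url(access_token):
    """Build the URL such an app would fetch to read the user's inbox."""
    return f"{GRAPH_BASE}/me/inbox?{urlencode({'access_token': access_token})}"
```

The striking part is how little ceremony it took: one token granted through one consent dialog, and the whole inbox was one GET away.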
It's undeniable that too many people from the SV tech community are deeply invested in the business, either as workers at Google or Facebook or as part of the large multi-billion-dollar ad ecosystem.
The fact is tech people made pompous claims about liberty and freedom and then sold out the moment they had personal gain. This is not their fault; the history of ethical posturing consistently plays out this way, and capitalism's incentives leave little room for ethics, making regulation the only workable option.
The default reaction is thus to minimize accountability, blame others, apologize, hand-wave, or deny the issue altogether. Every discussion gets mired in the same basic first principles of freedom and privacy.
If you can't behave ethically you have no basis to expect ethical behavior from others, or for an ethical society. You make the bed, you lie in it.