Facebook Data Collected by Quiz App Included Private Messages (nytimes.com)
342 points by sethbannon 10 months ago | 127 comments

Why is it surprising or even remotely controversial that an app that people explicitly authorized to access their messages (informed consent) then proceeded to access their messages? The app didn't have access to friends' messages, so I don't see what the issue is here, other than yet another clickbait headline.

For the people I've discussed this with, the surprising part is that the app can do anything it wants with the data. You and I know that (currently) if someone has the data, they'll scrape/manipulate it at will.

I believe a reasonable person would expect the limits of the access grant to end with the purpose of the app (a quiz) and not extend any further.

Most contracts/agreements have limits, and this one is implicit in the workings of the app. It is in no way clear that the app is earning its money by selling/manipulating the data you're giving it access to.

Additionally, it's disingenuous to ignore that a subversive app will invent/add features that in no way benefit the end user, existing for the sole purpose of scraping messages.

Intent is important; for some of these apps it is clear the only intent was to harvest data en masse.

Yes the user agreed to grant some permissions so a quiz could be taken. It cannot and should not be ignored that the app developer purposefully did not disclose the true nature of the app. This wasn't an accident. They didn't trip and fall into the data. They made something appear benign when it was really cancerous.

This is a concept I find myself perplexed by. The defense of a lot of the use of acquired data is "but you opted in!", which is of course true. However, when we give software access to things, it's usually expected that the access is to conduct tasks pertaining to why we would want that software in the first place. For example, when I use a web browser I give it internet access so that I can browse the web, not so that it can phone home and tell a company what websites I've been browsing. I feel like the defense "yeah, but you opted in to allowing it access to the internet!" is a bit misleading.

Perhaps the personality quiz example that seems to be the cornerstone of this whole CA story is different: I'm not sure what your expectations in allowing such an app access to your messages would be. However, there are quite a few examples of strange behaviour from software running on phones where it's pretty clear that isn't what you had in mind when you opted in. An example that comes to mind is Facebook apps activating your microphone to log conversations you have with people in person (I'm not sure there is any proof of this one yet, but would anyone be surprised?): when you opt in to microphone use, it's so that you can make calls in the app, not to invite spying. "But you opted in!" Yeah, okay. "Sorry!" - Zuckerberg

Another thing worth thinking about is terms of service for websites. Most people don't read them, they just click accept because it's required to use the website. Most assume it's just standard legal speak that most sites use.

I feel like private messages are something that should never be granted to third party apps on Facebook. People just click allow because they assume whatever is listed is being used for the purpose of the application.

I mean, just because I give the keys of my apartment to an electrician for a day doesn't mean they can use my apartment as they please and host a party for all their friends in the afternoon before I get back.

Yeah, but you just signed a contract that allows him to host a party in your apartment. So it means exactly that.

>Yes the user agreed to grant some permissions so a quiz could be taken.

You had to authorize the messages permission separately from all other permissions. If you didn't want your messages accessed, you simply declined that permission. This one's on the user, not on anybody else.

You're ignoring the 'reasonable person' argument which is important and forms a significant part of the legal system re: understanding contracts that you are signing. The reaction by the public and HN as a whole is perfectly justified. The permissions dialog is less interesting than the title of the app which wasn't "App to steal and repackage and sell your data you idiots!". In net, the app didn't tell the truth about itself, aided and abetted by Facebook's idiocy.

> The permissions dialog is less interesting than the title of the app which wasn't "App to steal and repackage and sell your data you idiots!".

The Obama For America app stole data from about 4x the number of profiles that this app did - about 200 million people, less than 1 million of whom gave them any form of consent, much less informed consent, as was given here with the messages permission. Do you have an issue with that app’s title?

Where did OP claim they didn’t have an issue with the Obama for America app?

I strongly disagree - this one is on Facebook. For building a system that even allowed a "quiz" to ask for message permission in the first place.

Everybody (and Facebook best of all) knows that users will click [Accept] or [I've read the 27 page T&Cs] or [I hereby give Cambridge Analytica my firstborn child] without even slowing down to consider any consequences - in return for another tiny dopamine fix.

Sounds like both sides got what they wanted.

You are willfully ignoring the strongest point in my statement.

When it comes to permissions and electronic privacy, consumers don't understand how things work, so they aren't really giving consent. And many aspects of the modern Internet are designed to trick them.

I think it's worth emphasizing how much people are being tricked.

Users are only being asked for permission to access their "messages", are given no further information, and cannot proceed if they do not submit to the request.

If you're a good-hearted person, you assume that an app needs to use your messages for one specific purpose, much like some apps ask for permission to read your text messages just so they can automatically validate incoming SMS confirmation codes.


So permission to access your messages could mean "we just need to do one thing" or it could mean "we're going to siphon up all of your private data"

The generically worded option preys on users being trusting enough to give the other party the benefit of the doubt. That is not how the system should work, and preventing exactly this sort of thing is the entire focus of the GDPR being passed in Europe right now.

Waving 'user consent' around as a reason to do nothing is the wrong approach.
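To make the SMS-code scenario above concrete, here is a hedged sketch (the function and message text are made up, not taken from any real app) of what the entire legitimate use of a messages permission looks like in that benign case:

```python
import re
from typing import Optional

def extract_verification_code(sms_text: str) -> Optional[str]:
    """Pull a 4-8 digit verification code out of one incoming SMS.

    In the benign scenario, this is the ONLY thing the messages
    permission is used for: read one message, grab the code, and
    ignore everything else.
    """
    match = re.search(r"\b(\d{4,8})\b", sms_text)
    return match.group(1) if match else None

code = extract_verification_code("Your FooApp login code is 483920.")
```

Anything beyond this, such as retaining or uploading message bodies, is exactly the kind of use a trusting user never pictured when tapping "Allow".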

Therefore we must regulate the internet to the tastes of the most paranoid person in existence, because anyone less paranoid is clearly incompetent to give consent?

By granting access to your messages, they are also implicitly gaining access to your friends' messages with you.

Everyone should have an expectation of privacy: when you send a private message to someone, that someone does not have the right to disclose the conversation.

So you would also like to impose a ban on email forwarding then right? Because that means the person who received your message took it, and exposed it to others that you didn't authorize. I can tell you that at least in the US, legally, your position doesn't hold water.

We're getting a little ridiculous here. There are valid complaints against Facebook in all of this - this just doesn't happen to be one of them.

Email as a protocol explicitly does not come with any guarantees (of privacy, or sender's identity, or receiver's identity, or anything else as far as I can tell).

Facebook makes a lot of noise about privacy (using their own definition, naturally, being 'preventing anyone outside Facebook from seeing your data') so it's reasonable for a nontechnical user to not expect that an acquaintance installing a "what's your pirate name arrr" app should lead to their 'share with friends only' info being acquired and sold by a data broker.

> So you would also like to impose a ban on email forwarding then right?

The difference is that email forwarding is part of the RFCs and IETF standards that define email, e.g.:




What do RFCs have to do with his comment that he has an expectation of privacy when he sends people messages? Regardless, at least under US law, he's way off base.

If two or more parties are communicating via a standardized protocol, then using that standard constitutes an implied agreement between the parties communicating.

Whereas with Facebook, while one or more parties might be bound by some agreement with the company, that doesn't create any clear agreement between users.

This is a pretty weak argument. Using a standard established by an RFC implies a user has read that standard about as much as using Facebook implies a user has read the TOS. The TOS establishes a transitive condition that the user agrees to about as much as they agree to one-to-one communication under the RFC.

If you have a problem with usage implying consent to a TOS then you can make that argument, but this one is flimsy.

You think people using email know anything about email standards? RFCs?

They see a "forward" button in their email client. Just like Facebook Messenger has a "forward" button on messages.

No one is expecting a different level of privacy in Facebook Messenger over email because of the IETF.

With GDPR, you may get in hot water pretty quickly when forwarding thoughtlessly. At least in a corporate setting, you may think twice about it.

Edited to add: Granting an app access to your contacts, for instance, is controversial from a legal perspective in Europe. Data protection agencies, at least in Germany, hold the opinion that you need opt-in consent from everybody in the address book to do that.

When the first person goes to prison for restoring an iCloud backup, I hope we’ll collectively realize how insane that is.

I am not sure if I understand you correctly. I certainly am very uncomfortable about my email address and phone number, together with my full name and address, being given to third parties without my consent. I regularly get "invites" from LinkedIn. I don't like being in their database and I never consented to that. LinkedIn surely will get some uncomfortable requests after May 25th.

> So you would also like to impose a ban on email forwarding then right?

No. I would impose a ban on selling our conversations for money or for services.

Edit: to clarify, I believe users on Facebook were bartering with data that wasn't theirs for access to Facebook's services.


And when I’m done doing that, I should probably check my sanity, because I have to be imagining this thread. It’s absurd that someone thinks that they have an expectation of privacy when they voluntarily choose to send a message to someone else. I know you can’t please all the people all the time, but this idea is just bonkers.

> It’s absurd that someone thinks that they have an expectation of privacy when they voluntarily choose to send a message to someone else.

How does this world view reconcile with the fact that many different states have two-party consent? Or the fact that considerable money and effort have been spent on concepts like OTR?

People expect others to have some discretion in what private information they pass on. Most applications have this assumption built in.

You can easily forward a single email to someone in Gmail, but there is no 1-click solution to send all of your emails to someone else.

> You can easily forward a single email to someone in gmail, but there is not 1-click solution to send all of your emails to someone else.

That’s actually wrong. Right now, today, you can write apps for the Gmail platform that could have this ability as soon as someone authorizes it [1]. So everyone is down on Facebook, who disabled this ability 4 years ago. What about Google?

[1] https://developers.google.com/gmail/
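For the skeptics: the Gmail API really does expose a user's mailbox to any app the user authorizes. A minimal sketch of the REST request an authorized client would make against the messages-list endpoint described in the linked docs (the token here is a placeholder, not a working credential):

```python
from urllib.parse import urlencode

def gmail_list_messages_request(access_token: str,
                                user_id: str = "me",
                                max_results: int = 100):
    """Build (url, headers) for Gmail's users.messages.list endpoint.

    Any app the user has granted a read scope to can page through
    every message ID in the mailbox this way.
    """
    query = urlencode({"maxResults": max_results})
    url = f"https://gmail.googleapis.com/gmail/v1/users/{user_id}/messages?{query}"
    headers = {"Authorization": f"Bearer {access_token}"}
    return url, headers

url, headers = gmail_list_messages_request("ACCESS_TOKEN")
```

The only gate between an app and the full mailbox is the user clicking through one OAuth consent screen, which is the same dynamic being criticized in Facebook's case.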

You are missing the point I am making. People are bartering with Facebook, using your private conversations, your contributions included, to do it. They ought to have no right to that data for their economic prosperity.

This isn't your nana forwarding chain mail.

It could still be a copyright violation if a third party is scraping/distributing and profiting off of something you wrote. Here's an interesting opinion on how forwarding private messages could be considered copyright infringement: http://nulawreview.org/extralegalrecent/do-not-forward-why-p...

So perhaps it's not an expectation of privacy, but more of an expectation that your private messages won't be distributed by third parties for profit.

People with a moral compass (and possibly sanity) such as yours pretty much show that self-regulation is poised to fail, and support the need for broad legislation.

You crossed into personal attack in this subthread. We ban accounts that do that, so could you please not do that?


Moral compass or not, that is the current state of the law. Sending an email to a second party and then expecting them not to do whatever they want with it is as naive as hanging a billboard and expecting nobody to look at it when I don't want them to.

The world doesn't just give you privacy. If you want any, you have to work at least a little bit for it.

That might be the state of the law for you, but most of us don't live in the US.

It's been years since anyone should have had that expectation.

That doesn't make it ok

That expectation has never existed. If you share something private with someone else, it belongs to them now, and they can forward it to whoever they want regardless of the sharing medium. The system is based on trust, not some arbitrary security measures.

The Canadian Supreme Court disagrees with you (5-2):

> "The court here has taken hold, embraced the challenge of the technology and caused the law to catch up with the technology, in line with Canadians' values," Lonsdale said. "Canadians can expect that their private, one-on-one communications with each other remain ones that they have a reasonable expectation of privacy."


Funny, I feel like there have been a bunch of _revenge porn_ legal cases recently that disagree with your assessment

If a friend of yours authorized access to their messages, then the app would have access to your private messages with that friend.

Only in the same way that apps that you authorize to work through Gmail also have access to your emails with other people. I just don’t see an issue here. The messages permission required explicit, separate consent from other permissions. There is not a single user that could argue that they didn’t understand that specific aspect of this app. This story is just a grab at clicks.

If we have a private conversation, you shouldn't have the right to reproduce the contents of my contributions to the conversation.

You are the one who put it in their phone/inbox in the first place. You can argue about copyright laws or NDAs, but otherwise "private conversation" means nothing.

One-party recordings are illegal in many states. Just because I unicast some information to you does not always give you the right to retain a record of it.

Recording laws are for phone calls. How would you even apply them to emails or texts?

Then you better go protest at Google's offices, because Gmail allows apps to interact with your email too.

by using Gmail, you are allowing Google to scrape what might otherwise be private messages

Indeed, and I am going to stop using Gmail as a result of this thread.

Not sure if you're being sarcastic, but you really should. Everyone should. It will cost about $5/mo [fastmail] to make it happen; unfortunately most people don't seem to assign much value to privacy.

But was it? "Informed" consent, that is?

I think the new CONSENT Act, by one of the authors of the child-privacy law COPPA, looks quite promising to make these things much clearer for users.


I watched the hearing and Zuckerberg kept evading questions about users "controlling data" responding with nonsense such as "Of course users control the data - they decide what to post on Facebook!"

Completely omitting the issue about what happens to their data once third-party developers have access to it, even in the context of sharing something with friends (and not "the public"), or in this case private messages.

And I agree with others here. Private messages should never even have been available as a permission. I think now I understand why Facebook recently said that they see Messenger chat as "public". I thought it was just a metaphor, but they seem to mean it quite literally. Your private messages on FB are no different from your public profile data to both Facebook and third-party developers.

Oh and by the way, Facebook also recently admitted that most of its 2 billion users' profile data, including email and phone numbers, have been scraped off the website.

> Why is it surprising or even remotely controversial that an app that people explicitly authorized to access their messages (informed consent) then proceeded to access their messages? The app didn't have access to friends' messages, so I don't see what the issue is here, other than yet another clickbait headline.

I'm not sure if we have enough details to say if this was informed consent or not.

But I doubt it was informed consent if the messages were gathered by the Facebook app. Back when I had that app installed, Android did not support granular permissions. One day I got an update notification saying a new update would get the permission to read text messages. I believe the rationale was so the app could automatically parse a text-message confirmation code. Since I'm savvy, I never installed that update, or any other update, ever again. Now it turns out Facebook used that permission to slurp up people's text messages into its profile of them.

This wasn't informed consent: I wasn't honestly told what they would use the permission for, the request wasn't clear (just the standard android notice), and it was an all or nothing ask.

Why is it even possible for an app to request your Facebook messages though?

It isn't...today. Remember, this app was deployed many years ago, prior to the 2014 API neutering.

I didn't think it was. The API docs have always said that the thread endpoint is only available to the developers of the app making the request. It's possible to get the contents of the inbox though.


You used to be able to build an alternative client using Facebook's published API.

I trained my predictive keyboard on Android on my Facebook messages and emails.

I could imagine interesting apps designed to riff on your language patterns.
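As a toy illustration of the idea (not any real keyboard's model, just a minimal bigram sketch over whatever corpus you feed it):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Count word -> next-word frequencies from a text corpus,
    e.g. an export of your own messages or emails."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word: str):
    """Suggest the most common word seen after `word`, or None."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("see you later see you soon see you later")
predict_next(model, "see")  # suggests "you"
```

Swap in your own message history and the suggestions start sounding like you, which is both the appeal and the privacy concern.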

There are a lot of valid uses. Why would it be better for your data to be locked up and unexportable?

False equivalence. Exportable doesn't mean accessible to the world.

The commenter didn’t make that “false equivalence.” Their point is that there are valid reasons a user might want to allow an application to access their messages.

For example, reddit’s API allows applications to access messages, which is useful for third party clients.

But someone could pay you for your exported Facebook data (which can include data from friends such as your message history, etc). From my understanding, this is essentially what CA did. CA just obfuscated that they were doing this by using the Facebook developer program to automate the process and Mechanical Turk to pay users to give them their data. I agree the Facebook developer program made it really easy to phish for this data.

Context and intent matter.

If someone at a gas station asks me "hey could you give me $5 so I can buy some gas. I'm out of cash." and I give them the money. Then they proceed to fill up a can and go burn down a house. Should I be arrested as an accomplice?

There was no indication that they would do that and if I knew there's no chance I would agree to give them anything.

You'd have to find additional evidence that I consented to that action. In this case, I really doubt anyone knew that their data would be used in this way when all they wanted to do is take a quiz.

Because if you mentioned the possibility in public, you'd be accused of fearmongering.

It is disheartening to see people label it clickbait; this is a very important message that needs to reach the public.


If you keep posting like this we'll ban the account.


What's scary is that Cambridge Analytica is probably only one of hundreds, maybe thousands, of companies that had access to this type of data from Facebook. This is just the one we know about.

Cambridge Analytica didn't pay to access this data. They got users to give it to them for free through Facebook's developer platform.

Actually Cambridge Analytica licensed the data from GSR, according to their press release from earlier this week [1].

> A research company (GSR) licensed the data to us, which they legally obtained via a tool provided by Facebook. Hundreds of data firms have utilized Facebook data in a similar fashion.

[1] https://ca-commercial.com/news/time-facts-not-conjecture-say...

Thanks for the correction, I thought CA paid for access to the data the test collected.

As mwarkentin mentioned, CA _did_ pay users via Mechanical Turk to install the quiz app. So they paid the users directly for their data, but not Facebook. It seems there is no way to completely stop this short of Facebook blocking people from exporting their own data, since CA could just pay people to send them their exported Facebook data.

My understanding is the "quiz app" was more of a phishing scheme to get users to share their Facebook data with Cambridge Analytica (including data that user had access to about their friends).

I believe they also paid users a small fee to install the quiz app via Mechanical Turk.

Agreed. I was a DoD contractor when much of the social graph stuff came out. We were all looking at it wondering why they let you access _so much data_. Our managers told us they were working on securing some contractors that would ultimately use Facebook's social graph but nothing ever came of it.

Considering we were a tiny defense contractor who didn't get a ton of contracts, I have to assume this happened in some way or form.

The first thing I checked after this CA debacle was which apps I have connected my FB profile to, and whether any fishy apps were among them. Fortunately, none are linked. I'd suggest everyone check the apps they've connected and unlink the ones which seem fishy.

The headline is misleading. The article isn't about Cambridge Analytica:

"Because of an editing error, an earlier version of a headline with this article misidentified the entity that improperly harvested the personal information of up to 87 million Facebook users. The information was collected through a quiz app developed by the researcher Aleksandr Kogan, not by the consulting firm Cambridge Analytica."

"It is not clear whether the direct messages were among the data eventually provided to Cambridge Analytica. In an interview on Tuesday, Mr. Kogan told The Times that the private messages were harvested from a limited number of people, likely “a couple thousand,” as part of a separate academic research project and never provided to Cambridge Analytica."

It's all part of the CA story:

"Aleksandr Kogan, a Moldovan-born researcher from Cambridge University, admits harvesting the personal details of 30 million Facebook users via a personality app he developed.

He then passed the data to Cambridge Analytica who assured him this was legal, he said. "

From https://www.theguardian.com/uk-news/2018/mar/21/facebook-row...

Cynical me feels that it's a hell of a coincidence Facebook is acknowledging this data leak just over a month before the GDPR legislation (and fines) drops on 25th May.

This wasn't a leak, it was a feature that existed on Facebook for several years before being removed.

I think you may be on to something. I've often heard that firms like Goldman end up just on the right side of certain thresholds for penalties, just as new financial legislation is announced. It certainly makes me suspicious that they get fed information early and react to minimise their financial burden.

Yet Zuckerberg is testifying as I write, denying and dodging responsibility and questions. I can't believe it: instead of lying down and confessing, he is trying to justify and blame everyone but Facebook.

He accepted blame for quite a bit, and apologized (as he usually does)

The part I don't understand about all of this is that, yes, somewhere along the line someone made the decision that giving developers access to that data was OK... but they REMOVED access to it as soon as they realized what sharing that data meant. That's how startups are trained to behave: shoot first, ask questions later.

They made a mistake and corrected on their own, what more is there to ask? I guess it's more of an issue because of the nature of their business?

> they REMOVED access to it as soon as they realized what sharing that data meant.

Did that unleak the data? No. It was a mistake, but it required a cleanup which wasn't achieved and wasn't made public.

Ed Felten coined the phrase "Exxon Valdez of privacy" over a decade ago in anticipation of this: https://freedom-to-tinker.com/2006/06/12/exxon-valdez-privac...

Wow, that's an amazingly prescient post.

I'd suggest that people popularize the phrase "Exxon Valdez of Privacy" but that may be old enough that people have forgotten it. Maybe substitute "Deepwater Horizon" or "Fukushima". If you want even more prescient, go back fifteen years to see him post a link to https://www.wired.com/2003/01/google-10/

"The company's growth spurt has spawned a host of daunting questions that no data-retrieval system can easily answer. Should Google play ball with repressive foreign governments? Refuse to link users to "hate" sites? Punish marketers who artificially inflate site rankings? Fight the Church of Scientology's attempts to silence critics? And what to do about the cache, Google's archive of previously indexed pages? "

Whenever a colossal disaster happens, there is nearly always someone who was telling anyone who would listen what the problem was years in advance.

Felten is not as famous as Schneier but just as important, and has been doing lots of important work such as fighting against electronic voting systems.

>what more is there to ask?

(1) Make an earnest attempt to use ML/algorithms to identify their customers who are using those leaked datasets Facebook negligently exposed, and help devalue the data, instead of eagerly selling them targeted advertising services? I don't know if they did this, but it sure seems doubtful.

(2) Quickly and openly disclose the extent of the leaked data

(3) Stop using manipulative and deliberately opaque TOS to enable ever more data collection

I'm not being sarcastic or insincere; this is my honest opinion of what they could have done. I am continually surprised at how many people making $$$ in ad-tech/PII data mining and brokering seem naive to the fact that this type of behavior would inevitably result in exponential growth of user outrage.

> (1) Make an earnest attempt to use ML/algorithms to identify their customers who are using those leaked datasets Facebook negligently exposed, and help devalue the data, instead of eagerly selling them targeted advertising services? I don't know if they did this, but it sure seems doubtful.

How could they do that? The cat is out of the bag and FB aren't going to have any knowledge about where that data is now. Have there been reports of it getting out from CA?

> (2) Quickly and openly disclose the extent of the leaked data

I think some caution is a good idea, they don't want to get the numbers wrong - although they are making steps in the right direction with the message to 87 million on their news feeds.

True, it's not easy to do, and maybe it's not feasible to determine who is using the data. I don't know; maybe someone from Facebook will chime in on the issue, or leak some more info about company behavior.

>I think some caution is a good idea, they don't want to get the numbers wrong - although they are making steps in the right direction with the message to 87 million on their news feeds.

Totally agree with the second part, but ~4 years (only divulging the info when forced to, during PR damage-control mode) is well past being cautious. It's being cautious with the amount of damage the disclosure does to your profits; Equifax doesn't even wait that long.

I guess it depends on how you define "responsibility". On the one hand, he was the CEO, so he's responsible for everything. On the other hand, it doesn't look like any of this was deliberate - more like negligent. The question is whether or not the lack of attention to privacy crosses the line into _gross_ negligence or not.

Why would he confess? This is a post-truth post-consequences world.

He has paid his dues to congress. What else do you expect?

Facebook has already updated the numbers from 47 million to 80-something million. This is going to get bigger than Yahoo scale ("all accounts and all data") pretty quickly.

The 84 million number is just for Cambridge Analytica.

The real number is north of 2 billion.

"Malicious actors have also abused these features to scrape public profile information by submitting phone numbers or email addresses they already have through search and account recovery."

"Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way."


The data accessed here was from a smaller unrelated research study. The only thing it has in common with the larger CA leak is that it was done by the same researcher.

The article seems to spin it as being part of the CA study, apart from the small commentary towards the end. Has anyone seen the text of the message FB has shown to the affected 87 million and does it explicitly mention private messages?

That's a pretty significant aspect. I'm sure he stole plenty more data in other unrelated studies

This data wasn't stolen.

Just borrowed and used inappropriately. Not "stolen"

Private messages? There is ZERO excuse for such a leak. It is either utter incompetence or intentional malice. Facebook needs to be held accountable, period.

By leak you mean people giving consent when presented with the list of permissions the app / quiz requested?

I think the outrageous part is that app developers could get data of their users' friends - those are the people that never used the app or agreed to its permissions.

It's amazing that FB is up 4.5 percent.

He's not under oath, [from what I've read] the majority of the committee members have received campaign donations from FB, and some committee members own FB stock. In other words, the market knows that this is a charade.

Having watched pretty much every hearing for the last 8 years, it was shocking to me how nice, friendly, and understanding they were. Other than Cruz, they all turned from their usual wolves into sheep.

Either it's the campaign contributions, or perhaps, not understanding the technology, most are afraid of what Mark could learn about them by running queries on their personal messages.

I've never seen them so nice, including asking whether he wanted a break now, at which the crowd laughed; they treated him like he was 8 years old. Incredible! No wonder he shook his head, smiling, at more questions!

It makes sense. Investors perceive that based on today's hearings Facebook will continue to generate piles of cash. This is most likely correct.

Around 2011+ we saw not only quiz apps but also offerings such as "See who views your profile" that would result in an OAuth authorisation. How long were those authorisations active before being revoked? How much data was exfiltrated, then and since, and to whom?

If it weren't for recent changes that suspend authorisations after a period of time, these tokens could seemingly still be worth something to the right person.

The root problem being, average users don't know what they're giving access to, or why it's important to be critical of such access.

How was this possible? I don't recall ever having access to private messages in the FB API. Was this ever available to developers?

Apparently, v1.0 of the Facebook Graph API could access users' private messages via the 'read_mailbox' permission [1]. This was deprecated when v2.0 launched.

"Version 1.0 of the Graph API launched on April 21, 2010. It was deprecated in April 2014 and closed completely to legacy apps (ie, existing apps that used the API before April 2014) on April 30, 2015."

[1] https://medium.com/tow-center/the-graph-api-key-points-in-th...
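For context, here's a minimal sketch of what a v1.0 request against the user's inbox might have looked like under the 'read_mailbox' permission. The endpoint shape and token value are illustrative (reconstructed from memory of the old docs, not authoritative), and no request is actually sent:

```python
import urllib.parse

# Placeholder token: a real one would have come from the app's OAuth flow
# after the user approved the 'read_mailbox' permission.
ACCESS_TOKEN = "EAAB-placeholder-token"

def inbox_url(token):
    # Build the (now long-defunct) v1.0 inbox request URL.
    # Nothing is fetched here; we only construct the URL string.
    params = urllib.parse.urlencode({"access_token": token})
    return f"https://graph.facebook.com/v1.0/me/inbox?{params}"

print(inbox_url(ACCESS_TOKEN))
```

The point being: for an app that had been granted the permission, reading a user's message threads was a single authenticated GET, which is why the data was so easy to bulk-collect.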

But why? Why would anyone set up an API access to PRIVATE messages. That's crazy :o

On one hand it's actually a fairly reasonable API. Imagine using third-party AIM clients a decade or more ago. Same kind of thing.

They never provided the ability to send messages, which makes it useless for AIM-style clients.

You used to be able to connect to facebook messenger via XMPP. Combined with this permission, it would have let you retrieve historical messages and add persistence among alternative clients.
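For reference, third-party clients of that era connected with ordinary XMPP account settings along these lines (values are from memory of the old Facebook Chat docs, so treat them as approximate):

```
Protocol:   XMPP
Username:   <your-facebook-username>
Domain:     chat.facebook.com
Server:     chat.facebook.com
Port:       5222
Encryption: require STARTTLS
```

XMPP alone only gave you live chat, though; the 'read_mailbox' permission is what would have added message history on top of it.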


Actually they did. I used the old OSX iChat to message with friends on FB.

Other third party applications had that ability as well.

Facebook's agenda at that point was to get as many developers onto their platform as possible by enticing them with all this access to "data people gave away".

If Adium, Pidgin, or another multi-protocol chat application had needed that permission at the time to combine my Facebook and AIM lists into a single application, I would have certainly understood it.

In late 2013 I used (I think) the Graph API to pull my own private chat messages so I could see what messages I sent and received on a particular day, to remind myself when certain events happened. I can't think of many good reasons for this API to exist to third parties, but it was pretty handy for my own data.
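As a sketch of that kind of personal-archaeology use case: given messages already pulled down from the API, filtering them to a particular day is trivial. The message shape below ('created_time', 'message') is assumed for illustration, not taken from any documentation:

```python
from datetime import date, datetime

def messages_on(messages, day):
    # Keep only the messages whose timestamp falls on the given date.
    # 'created_time' is assumed to use the API's ISO-8601-with-offset style.
    picked = []
    for m in messages:
        ts = datetime.strptime(m["created_time"], "%Y-%m-%dT%H:%M:%S%z")
        if ts.date() == day:
            picked.append(m)
    return picked

sample = [
    {"created_time": "2013-11-02T09:15:00+0000", "message": "lunch?"},
    {"created_time": "2013-11-03T20:41:00+0000", "message": "got home"},
]
print(messages_on(sample, date(2013, 11, 2)))
```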

A third-party messaging client would need it.

“Private messages” as in those sent via Facebook Messenger, right? Not Secret Conversations, Facebook’s allegedly e2e messaging product [1]?

[1] https://www.facebook.com/help/messenger-app/1084673321594605...

I don't think Facebook's e2e messenger existed at the time this took place.

Yes - I don’t think “Secret Conversations” existed when the data was harvested from the Quiz App.

Title should be: "Whoops, looks like you gave that one quiz app all your private messages too, you idiot!"

Unfortunately for startups, I think the only solution is to make it too expensive and complicated for businesses to collect any unnecessary data. It's like dealing with credit cards: no dev in their right mind wants to process CCs themselves, because being fully PCI compliant is such a huge risk and pain in the ass, so we all outsource that part of the work to third-party processors because of the regulations. Why not do the same for personal data: make it a very high risk for a business to process it directly, and the majority will avoid dealing with anything that is not absolutely needed by their business model.

The alternative is open source and decentralised: https://diasporafoundation.org

Do they mean "wall posts" or "messages"? Did Facebook ever produce an API that let any developer see private Messenger messages?

When it comes to "social media" I'm not easily stumped, but I had to do a double take at that headline.

A neutral discussion on surveillance and adtech cannot happen here because of the makeup of the audience.

It's undeniable that too many people in the SV tech community are deeply vested in the business, either as workers at Google or Facebook or as part of the multi-billion-dollar ad ecosystem.

The fact is tech people made pompous claims about liberty and freedom and then sold out the moment they had personal gain. This is not their fault alone: the history of ethical posturing consistently plays out this way, and capitalist incentives leave so little room for ethics that regulation is the only workable option.

The default reaction is thus to minimize accountability, blame others, make excuses, hand-wave, or deny the issue altogether. Every discussion gets mired in the same basic first principles of freedom and privacy.

If you can't behave ethically, you have no basis to expect ethical behavior from others, or an ethical society. You made the bed; now you lie in it.

I am afraid the details and extent of the breach may never come out.
