Zuckerberg on Cambridge Analytica situation (facebook.com)
854 points by runesoerensen 5 months ago | 544 comments



Trusting developers not to sell any data, while putting in place zero safeguards to prevent it and no punitive repercussions for doing it, despite being repeatedly told about it by the public, the media, and even high-level employees, tells me Facebook can't plead ignorance on this: they not only knew this was happening, but they probably intended for it to happen. They knew it was illegal but put all the incentives for companies not to follow the rules. That's the only hole in his statement.

As for the rest of it, it's progress. It seems like a lot of good changes, but analytically, Facebook execs probably concluded that this is the least they had to do to stave off regulation or antitrust action from Congress. Any less, and regulations would still be placed on them, so it's a brilliant strategy to do this and frame it as though Facebook is concerned about all the damage and is doing this voluntarily for you, instead of the truth, which is: we knew about this forever and are only acting because of the threat of regulation.

Overall, an optimal outcome for all parties currently, except for society as a whole down the line.


They knew it was illegal

What, exactly, are you claiming about this that was “illegal”? When you sign up for Facebook, you agree that anything you post might be shared with others on the platform.

The Facebook developer platform became so limited in 2014 that most developers (including me) left. There was no point in developing apps for the social graph that had no ability to use the social graph. But even prior to that, the sharing of this information with apps, even those authorized by a friend, wasn’t “illegal”. You agreed to it when you signed up for Facebook and voluntarily handed them your information. Even the idea that developers were supposed to delete the information they had before was just a civil agreement between the company and themselves - it wasn't illegal. Facebook can certainly sue them over it, but there are no violations of the law occurring here.

So what about this whole situation is "illegal"?


If this was about tracking and informed consent, then the illegality is there: https://amp.theguardian.com/technology/2015/mar/31/facebook-...

There was no consent, because tracking was not opt-in but opt-out.

More recently regarding user personal details and lack of consent, again deemed illegal in Germany https://amp.theguardian.com/technology/2018/feb/12/facebook-...

The pattern I see is that Europe considers Facebook's methods of commoditizing users' data to be unreasonable, and that they will be regulated.


That's a different issue, though, and unrelated to this thread.


It is relevant and related to their track record regarding how they treat user privacy, what data they have previously released to third parties, and what their current legal standing is with regards to the same issue, especially in light of Cambridge Analytica.


It's not. I recommend you create a company account on Facebook and be horrified at the amount of information you can collect about anyone.


I was addressing the comment "they knew it was illegal" and the response "what .. was illegal?".

The two links suggest awareness by Facebook of the illegality.

So how are the two links irrelevant to the privacy concerns highlighted by this whole story?


>There was no consent

Voluntarily putting your info on a free site is the consent.

Would you also cry "illegal" if HN sold your post history, which they have tied to your IP address, which they can easily link to who you are?


A bit of a nitpick, but Hacker News post history and comments are publicly available. Most, if not all, privacy law treats public data differently.

The majority of Facebook data, on the other hand, is visible only to a selected group of individuals, and its handling cannot contravene a country's privacy laws. Despite whatever agreement you signed, law trumps user agreements.

As an extreme example, I can sign something giving you the right to kill me, but if you do it you have still committed premeditated homicide. So it does not matter what Facebook made you sign; it is likely they committed (and still commit) a crime in most European countries. For the U.S. the waters appear extremely murky, from what I understand, so I can not offer an opinion. If a lawyer or privacy expert wants to pitch in on this discussion, that would be nice.


> I can sign that I give you the right to kill me, but if you do that you have still committed pre-meditated homicide

This sounds as if nobody can agree to voluntarily take part in a potentially fatal experiment, like testing a potent medicine or being a test pilot. I believe legal provisions exist here.


That depends on the law in each individual legal region (most probably each country). For example, I believe euthanasia is legal in some European countries.

Even the opposite can be true. Life saving medication may be illegal in one country but another provides it without restriction.


Kind of sums up my reply. Generally, if you know, or there is enough suspicion, that the cure is going to harm or kill someone, and euthanasia is out of the picture, then you are liable. That's why the whole approval process for medical trials exists. One nice window here is that you can in general experiment on yourself (people have done and are doing that). It depends on the framework, but good point.

Anyway, my point here is you cannot override the law (or you shouldn't be able to).


Absolutely.


At least in the UK you can’t sign away your health and safety rights. Even if the job is dangerous the employer still has a duty of care.

Having said this there are some job roles that are excluded from specific parts of the legislation due to their unique nature (e.g. the Army)


If it mattered to me, was necessary, and I was in the right legally, then I would need to.

Each country has its own laws covering this situation.

Giving someone personal information still restricts them legally, irrespective of what they think I've consented to or their own definition of consent. The legal system has its own opinion.


It's not clear to me how much, if any, of the work was done in the United Kingdom or by British companies.

If there are companies in the UK, or people working in the UK, the sharing or retention of data may have been illegal under British law.

https://en.wikipedia.org/wiki/Data_Protection_Act_1998



One day, the whole farce that is "you agreed to the terms and conditions" deserves to just die.

Almost no one reads them, so they should not be enforceable.

I mean, as developers we know when a session is established and which pages are visited, and we can easily see how long someone has been on a page.

No one can read the typical T&C in 10 seconds... let alone 1 minute... especially without even opening the page! So the options should be something like:

[x] I don't care, just whatever dude.

[ ] No. Get me out of here. Because I don't know how to close the browser window myself.

EDIT: Found this as proof: http://www.pcpitstop.com/spycheck/eula.asp
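
Since we already know when the terms were served and when "accept" was clicked, a plausibility check is trivial to write. A minimal sketch in TypeScript (the ~230 words-per-minute figure and all names here are my own assumptions):

  const WORDS_PER_MINUTE = 230; // assumption: average adult reading speed

  function minimumReadingSeconds(wordCount: number): number {
    return (wordCount / WORDS_PER_MINUTE) * 60;
  }

  // Accept an "I agree" click only if enough time has passed for even a
  // generous skim at triple the average reading speed.
  function plausiblyRead(wordCount: number, elapsedSeconds: number): boolean {
    return elapsedSeconds >= minimumReadingSeconds(wordCount) / 3;
  }

  // A 15,000-word EULA "accepted" after 10 seconds:
  console.log(plausiblyRead(15000, 10)); // false

No site would ever ship this, of course, which is rather the point.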


>Almost no one reads them, so they should not be enforceable.

It varies depending on where in the world you are, but it's actually pretty unclear whether they are enforceable. Or at least, a specific set of T&Cs with a specific user/customer may be found to be unenforceable for a wide range of reasons.

Of course, the company that puts the T&Cs in front of you isn't going to tell you that.


> Even the idea that developers were supposed to delete the information they had before was just a civil agreement between the company and themselves - it wasn't illegal. Facebook can certainly sue them over it, but there are no violations of the law occurring here

Perhaps not in the US, but I'd like to point out that it's not the same everywhere: I think the EU is moving in another direction. There's a whole question of whether a company has a responsibility to do due diligence around protecting the personal information of its users. Just because you gave them your data doesn't always mean that they can now do whatever they want with it (e.g. give access to detailed information in large amounts to third parties). Even if you sign an agreement, in many jurisdictions there are certain rights a company can't just make you sign away.


>When you sign up for Facebook, you agree that anything you post might be shared with others on the platform.

There's no consent on behalf of the friends when someone shares information about their friends with an app. Consent is given directly.


It doesn't matter what you signed or how it relates to civil law; you cannot sign away your statutory rights.

What matters is how criminal law views this particular data collection in the context of Facebook's working relationship with this particular client, within all of the various jurisdictions that Facebook operates.

The article "What Colour Are Your Bits", is a pretty good look at this - http://ansuz.sooke.bc.ca/entry/23


Possibly referring to a violation of their 2011 consent decree with the FTC.


>When you sign up for Facebook, you agree that anything you post might be shared with others on the platform.

Except you CAN'T sign away your rights in many jurisdictions; there's also this thing called HIPAA. So if Facebook sold people's mentions of health problems to third parties... is that a violation?

I mean, this is the exact scenario for which people grilled Windows 10's spyware. But somehow, when Facebook does it, it isn't an issue? What's the difference? I'm honestly asking.


Businesses accusing each other of illegality. How rich.


Yeah, this is one of the worst parts about this whole situation. Everyone screaming ILLEGAL!! and also somehow acting like Facebook is the only one doing this?

Almost EVERY free site is trying its damn best to collect as much info as it can and link everything together to sell, so they can - you know - make money.

I guess those people are instead willing to pay for virtually every site they use on the internet, right? Right? crickets


Yeah, why are police going after that one murderer when there are so many out there that they haven't caught yet??


It's not a binary equation, though. If data collection were illegal, content and service providers would be much more likely to try other solutions.


So FB opened up their platform to 3rd-party devs in 2007, and this CA incident happened in 2013. FB changed their policy in 2014 to stop allowing these devs broad data access. So my question is: why did they only commit to auditing the pre-2014 apps now that the NYT and Guardian/Observer have broken the news? And what policy is in place now that makes sure these 3rd-party devs won't sell whatever info they do collect under the post-2014 policy? I am not satisfied with what Mark said just now. We need more answers.


They probably assumed that if devs couldn't access the data anymore it wouldn't be a problem in the future.

> We'll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data.

Sounds like they'll make the developer agreement legally binding so they can take legal action for violating the ToS.


I kind of doubt that this will stop all of the devs that check their API keys into public repos every day, which are then used by other parties, but at least it will allow Facebook to cover its ass a bit more.


[flagged]


> when Obama committed even worse Facebook privacy violations back in 2008/2012

Obama's campaign said they were scraping for academic purposes, downloaded a bunch of data and then lied about deleting it? There are multiple levels of B.S. when it comes to Cambridge Analytica and Facebook. Only one finds comparison in the 2008 or 2012 races (on either side).


You're framing this improperly. Prior to 2014, anybody could create an app, and if people authorized it to, it could access some friend data. Your comment reads as if they went and made some special arrangement with Facebook and lied to them saying it was for "academic purposes". That just isn't the case. Every single Facebook user with the ability to write code could do this easily and automatically for the 7 years leading up to the change in 2014.

What Facebook did with the Obama campaign was far worse. No authorization was required - they gave them free, full access to the entire social graph. Didn't like Obama and didn't authorize an app to access your and your friends' info? Too bad, your information was still given to them and used to help get him elected. That wasn't a problem for anyone in the press though, because of course, Obama was a Democrat.

edit: I misspoke. There was an app required, but the Obama campaign accessed 190 MILLION profiles, versus 50 million involved in CA. According to [1]:

"The campaign boasted that more than a million people downloaded the app, which, given an average friend-list size of 190, means that as many as 190 million had at least some of their Facebook data vacuumed up by the Obama campaign — without their knowledge or consent...A former [Obama] campaign director, Carol Davidsen, tweeted that Facebook "was surprised we were able to suck out the whole social graph, but they didn't stop us once they realized that was what we were doing.""

[1] https://www.investors.com/politics/editorials/facebook-data-...


> According to a Time magazine account just after Obama won re-election, "the team blitzed the supporters who had signed up for the app with requests to share specific online content with specific friends simply by clicking a button."

You are doing the old classic false equivalence... The deliberate use of an Obama-sanctioned Facebook app that asks you to press a button to share get-out-the-vote content (probably for Obama) with your friends is 100% different from a Cambridge University researcher creating psych profiles on people, ostensibly as a fun way to find out your profile, then selling that harvested data to a different campaigning firm, Cambridge Analytica, to micro-target ads at them for political purposes unbeknownst to them. It's about expectations: one person expects it to be used in a political fashion because they opted into it, and the other is not expecting it to be used to target themselves and their friends later.


The Obama app accessed the data of roughly 189 million friend profiles that didn't authorize the app (it only had about 1 million installs). About 95 million of those people would have consciously objected to helping the Obama campaign had they known about it. But all of their data was collected and used by the Obama campaign to target political ads and formulate campaign strategy anyway, even though none of those 189 million people expected or specifically authorized their data "to be used in political fashion".

So how is that a false equivalence?


> That’s because the more than 1 million Obama backers who signed up for the app gave the campaign permission to look at their Facebook friend lists.

Permission, for one.

> The campaign called this effort targeted sharing. And in those final weeks of the campaign, the team blitzed the supporters who had signed up for the app with requests to share specific online content with specific friends simply by clicking a button. More than 600,000 supporters followed through with more than 5 million contacts, asking their friends to register to vote, give money, vote or look at a video designed to change their mind.

Intent and deliberate action up front by the user of the Facebook app, for two. Not some shady psych profile that gets sold later and used behind the scenes to target them, with no visible link to the campaign. That's absolutely a false equivalence.

Also, that number 189 million is absolutely likely to be way too high and is likely bullshit. Say I have 190 friends, with significant overlap: say 50 friends who all know each other and are likely also friends with each other. It simply doesn't follow that every single friend has 190 unique friends, producing a graph of that size, even though the average across all users might be 190 friends. Have a few more of those clusters for different groups (business, whatever) and you have a much lower number. Couple that with the fact that you have no idea what the average number of friends is for the people that opt into that app - if you are just comparing it to the overall average for the entire set of Facebook users - and you possibly have a lower number still.


Permission, for one.

Which were the same permissions that the users of the Kogan app gave. Remember that the issue here revolves around the fact that friends of the users of either app never gave permission for their data to be used. The only difference is that Obama did this to about 4X as many people.

Also, that number 189 million is absolutely likely to be way too high and is likely bullshit

According to their own campaign manager, they obtained the entire US social graph, and Facebook knew about it and allowed it to happen. The 190 million number may actually be low.


> Which were the same permissions that the users of the Kogan app gave. Remember that the issue here revolves around the fact that friends of the users of either app never gave permission for their data to be used. The only difference is that Obama did this to about 4X as many people.

You're still wrong, but at this point, with all the evidence showing just how different the situations are, I suspect you want to be, for whatever reason.


What exactly am I wrong about?


It's an agency issue. If I use an attributed channel it's like meeting you and saying "hey remember your friend the CS guy? Our company is looking for someone like that. Could you pass the word along?" vs pretending to be chatting with you when all I'm looking for is relevant recruiting leads which I will then use unbeknownst to you.

For hiring this isn't as touchy a subject, but surely there's a qualitative difference in these two interactions. One is based in an above-board interaction. The other in subterfuge and hidden motives.


The end result for the friends, which comprise 99.5% of the victims in both the Obama and Kogan cases, is precisely the same. Nothing was represented to them, their data was just used because someone they were friends with installed the app.



You're missing the point. The OFA app determined who should share what with who by running all the extracted profile data (including that taken from opted-in users' oblivious, perhaps anti-Obama friends) through their psychographic models. It's identical, except it was for "the good guys".


> You're missing the point.

I think you are the one missing the point.

> The OFA app determined who should share what with who by running all the extracted profile data (including that taken from opted-in users' oblivious, perhaps anti-Obama friends) through their psychographic models.

Yes, this was a political app, downloaded by people motivated to get Obama elected president. They shared that info willingly with the Obama campaign. The Obama campaign used that information to suggest, "hey man, we need help in Texas, and you know someone that might be able to help us. Can you do us a solid and send them some of this info?" Nothing I have seen suggests that they did anything untoward with the information voluntarily and directly shared with the campaign. They used it to suggest people to share the message with. And they took every action with the direct permission of the people using the political app.

> It's identical, except it was for "the good guys"

Not even a little bit. The Kogan fellow used the data he harvested from those psych profiles and also got the friend information that the people taking those "tests" doubtfully wanted him to have. If I take a silly test that is a Facebook app, I would not expect it to data-mine my friends and later sell that information off to a political campaigning firm to use for a political campaign they may not have wanted to support. If Trump had used an app that did the same as Obama's, I would have had no issue with it. He didn't, and I do.


Again, you're missing the point. The people who SIGNED UP for the Cambridge Analytica app, or the Obama For America app agreed to share their own information with the app. In doing so, they ALSO agreed to share data on their entire friends list with the app -- Facebook had no restrictions in place on this at the time.

Those people who were opted-in by proxy, i.e. the friends you sold out, may not have wanted the Obama campaign or Cambridge Analytica to get that info, but they never had a choice!

Both apps (well, OFA for sure, CA allegedly) took data legitimately provided to them, used it to feed predictive models, and then actioned marketing around exploiting those learnings.

>We released this tool for the Obama Campaign in August 2012. Over the next 3 months, we had over a million supporters authorize our app, giving us potential access to over 200 million unique people in their social network. For each campaign activity (persuasion, volunteering, voter registration, get-out-the-vote), we ran predictive models on this friend network to determine who to target. We then constructed an “influence graph” that matched existing supporters with potential targets. For each supporter, we provided a customized list of key friends with whom to share different types of content. [0]

Literally the ONLY difference, other than the political leanings, is that Kogan's app data was then "acquired" by CA, in breach of Facebook's TOS. Which, again, is something that probably happened ALL THE TIME in the pre-2014 wild west of Facebook app mining.

[0] http://www.rayidghani.com/what-can-non-profits-learn-from-so...


The point you are missing is that there was no Cambridge Analytica app, and nobody opted into an app that doesn't and didn't exist. They opted into a completely different app that data-mined them for completely unrelated purposes; that data was then, much later, sold off to CA, which used it to target people. One was on the up and up, and one was CA. Again, if Trump had done what Obama did, on the level, it would have been fair game. They were shady, used shady tactics, and shadily acquired data. Not a single person opted into the Kogan app thinking it was a political app, but people who used the Obama app knew what they were getting into.


The point you're missing is that most affected users didn't download anything, and were simply friends of someone who did (the OFA or Kogan app). The distinction you make, of OFA being on the up and up and CA not, is valid, but it doesn't mean OFA is suddenly on a completely different level. There was still a massive amount of data sucked up without consent.


> The distinction you make of OFA being on the up and up and CA not is valid

Bullshit, plain and simple. You are falsely making those two equivalent.


If my Facebook friend installed the Obama app, their campaign had access to and collected my information.

Do I have a right to be upset about this or not?


You're completely wrong about this:

https://twitter.com/mbsimon/status/975231597183229953

"I ran the Obama 2008 data-driven microtargeting team. How dare you! We didn’t steal private Facebook profile data from voters under false pretenses. OFA voluntarily solicited opinions of hundreds of thousands of voters. We didn’t commit theft to do our groundbreaking work."


See above quote from their own campaign manager. Here's a link to her own tweet on this issue if you don't believe me or the publication I linked to above:

https://twitter.com/cld276/status/975568130117459975


I'm not taking sides here, and I could be very wrong - my assumption here is that Cambridge didn't have a "Trump campaign app"; they had some other app, then sold that data to a third party. Obama's campaign had a "campaign" app, and used the data collected from that app.

We can all argue about whether facebook should even allow apps the kind of data they do, but the crux of what CA did was to re-sell data they collected from Facebook, right? This is where the two aren't the same, as far as I'm aware, but like I said, I'm not super in the know.


the crux of what CA did was to re-sell data they collected from Facebook, right?

No. An independent developer had an app many years ago, recorded data from that app, and then sold that data to CA years after the fact. That developer did in fact violate Facebook's developer platform policies by selling the data, but CA had nothing to do with the app or what was represented to the people using it when they installed it.


Something is off here - not in the discussion, but in the larger picture.

Why didn't Hillary win? I'm sure she had access to similar tools.

I think someone needs to take a look at the Hillary campaign and contrast it, to identify exactly how effective Facebook data really is.


Didn't CA report to FB that the data was deleted when in fact it was not?


Kogan did report that, but ended up not deleting it, and also ended up selling the data to CA.


Are you playing mental gymnastics here? CA bought the data; CA had nothing to do with the app. Right - they bought the data. I get that they aren't responsible for the person violating Facebook's rules. I think we already established Facebook is the boogeyman here. But we were comparing the Obama campaign app to an app unrelated to a campaign. In this thread there was the implication that "Obama did the same thing", and I think we can hopefully agree that Obama's campaign didn't sell their data in violation of the FB TOS as far as we know, while the owner of the app who sold to CA did. Sorry I got the name mixed up. Yes, CA didn't sell. But neither did the Obama campaign. So there is not an equivalence.


Of course there's an equivalence. The exact same thing happened - Obama's campaign received the data of more than 100 million people - about 99.5% of whom didn't explicitly authorize them to receive it. They then used that data in violation of Facebook's Developer TOS. Specifically, the Developer TOS say that you aren't supposed to use the data you receive from the API for any purpose other than the functionality of your app. For example, taking that information and analyzing it to produce campaign strategies and/or using it for targeted political advertising is and was against the Facebook TOS.

How is that not exactly the same thing, just done on a much larger scale by Obama? I get it, this is mostly a lefty crowd here on HN. But I just can't stand hypocrisy on either side. The fact that hundreds of commenters in here want to defend one guy and castigate another for doing the exact same thing based solely on the political affiliation of each is disgusting to me.


I can't really disagree with anything you've said here. The only thing I can say is that your comments might not be received well because they are very firm. "The exact same thing" is a pretty assertive statement that I think several here disagree with, on merit or not, and being a bit more flexible in communicating this (which, by the way, thanks!) might help others receive it better. Not that it's your job to do that or anything; just sharing my POV.


> No authorization was required - they gave them free, full access to the entire social graph

Source? All I can find is a scheme where they prompted people to sign in to a campaign site and grabbed the friend data that way. Icky as hell, but it's not obvious why that's "even worse" than CA.


It does seem like the Obama campaigns got preferential access to Facebook data. Data for even more people than CA got. And perhaps the Clinton campaign did as well, or at least got access to the older data.

What distinguishes CA and the Trump campaign is how well the data was used. CA staff apparently just did a better job at using the data to manipulate people. Their canvassing app. Using bots on Facebook and Twitter. Plus entrepreneurs from Eastern Europe or whatever. And maybe they went further, discouraging potential Clinton and Sanders supporters from voting.

But anyway, the key point isn't whether CA or Obama/Clinton got more data from Facebook, or whether Facebook willingly helped Obama/Clinton. The key point is how well voters can now be manipulated. It's another level in the problem that money buys political power. And arguably AI could do the same.


There's a lot of bad information in this thread.

https://twitter.com/mbsimon/status/975231597183229953

"I ran the Obama 2008 data-driven microtargeting team. How dare you! We didn’t steal private Facebook profile data from voters under false pretenses. OFA voluntarily solicited opinions of hundreds of thousands of voters. We didn’t commit theft to do our groundbreaking work."


Here's Obama's 2012 campaign manager discussing this. They did it differently four years later, and Facebook had four more years of data and tens of millions more users at that point.

https://www.youtube.com/watch?v=mZmcyHpG31A


It's actually not such bad info [1] (from Carol Davidsen, Obama's former campaign manager). The guy you quote certainly has the opinion that it was OK to access and use the data of 190 MILLION people for political purposes, only 1 million of whom explicitly authorized him to do so, but I'm guessing that the majority of those 190 million people would vehemently disagree with him.

https://twitter.com/cld276/status/975568130117459975


CA staff apparently just did a better job at using the data to manipulate people.

I've seen no hard data showing that any of this did Trump any good. Apparently not a single study has been done as to whether people either failed to vote or changed their vote based upon fake news or the use of this data.


I haven't either. My opinion is based on my own anecdotal observations, and news coverage, both of which may be biased.

I do follow fringes of online anarchism and anomie, and I was surprised to see so much support for Trump, when the polls were showing him so far down. And much of that support was so odd that I thought it ironic. But whatever.

I'd like to see such a study, for sure. But I'm not optimistic. Arguably, many who voted for Trump would be no more forthcoming to researchers than they were to those doing the election polling. There's just too much polarization, I suspect.


For starters, they accessed 4X the number of profiles that CA did. I have edited my comment, however, because it appears that they did (just like CA) have an app that they used. Except they used it, as one Obama campaign manager put it, to "suck out the whole social graph" of voters in the US, and Facebook "was surprised, but didn't stop [them] once they realized they were doing it".

By the way, this is something that would have been stopped on any other app long before they ever accessed 190 million profiles. So Facebook essentially gave them the entire US social graph, that they wouldn't have allowed any other app with only 1 million users to have.


> Obama's campaign said they were scraping for academic purposes, downloaded a bunch of data and then lied about deleting it?

Fair point. But I'm wondering if anyone asked them to delete it, and then went about verifying it later. I am guessing nobody bothered too hard, because, as the person in charge of the strategy said, "Facebook were on our side".

https://twitter.com/cld276/status/975568208886484997

---

They [Facebook] were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.

---


They didn't tell Facebook what they were doing, and then, when they took way more than Facebook was expecting, Facebook was okay with it because of politics, or was too afraid to oppose it. Then all the political opposition was powerless to compete with that efficiency of targeting and had to pull off a heist to even the playing field, before the politics of social media company executives became the only relevant politics.

It's not like academics are morally superior to anyone else anyway, so I don't think Facebook should be handing that stuff out to anyone.


> They didn't tell Facebook what they were doing

Source?


https://www.investors.com/politics/editorials/facebook-data-...

"Facebook was surprised we were able to suck out the whole social graph, but they didn't stop us once they realized that was what we were doing." - Carol Davidsen, Former Obama Campaign Director


> Facebook was surprised we were able to suck out the whole social graph, but they didn't stop us once they realized that was what we were doing

The extraction of Facebook's social graph is the element I agree "finds comparison in the 2008 or 2012 races" [1]. But Obama's campaign didn't misrepresent its identity or intentions to Facebook. Kogan and Cambridge Analytica did. Obama's campaign didn't lie about deleting its data when asked to (which, to my understanding, it wasn't). Kogan and Cambridge Analytica did. Moreover, Obama's campaign reported its backers to the FEC; Kogan and Cambridge Analytica did not.

[1] https://news.ycombinator.com/item?id=16642536


CA didn't represent anything to Facebook. They never created an app. They bought data, years later, from Kogan, who had created an app. Kogan violated Facebook policies, but there again, he didn't "misrepresent his identity [or] intentions" to Facebook. You didn't (and still don't) have to represent any intentions to Facebook to create an app. You simply create the app and agree to the platform terms and conditions. You're framing this as if CA & Kogan created some kind of special relationship with Facebook under the guise of academic research, and that just isn't the case. The only app in this situation that was granted special permissions or where Facebook knew the true intentions was the Obama app, and it's arguably worse that Facebook knew what their intentions were. That's because they knew that about half of the users involved - 95 million people out of 190 million - would never have knowingly allowed their data to be used to help Obama, but they allowed it special access anyway.

As for Obama's campaign not being asked to delete data, all apps were required to delete data that they came into possession of under the pre-2014 policy. Those were the terms they agreed to when they deployed the app - especially one that was allowed special access to the entire US social graph (where no other app would have been). So if Kogan violated the policy, so has the Obama campaign.


The "false intentions" here are about the user agreeing to use the app.

Obama: "Give us a list of your Facebook friends, and we'll help you contact them about voting for Obama." While the Obama campaign did collect data about friends, this data was voluntarily given to them by users. If the Obama campaign had your Facebook data, it was because one of your friends knowingly and voluntarily gave it to the Obama campaign.

Kogan/CA: "Take a free personality quiz!" The people taking the quiz likely had no idea that they were supporting a political campaign or helping Trump win. You could have your data given to CA even if nobody in your friend graph supported it.

Also, that 190 million figure is probably inaccurate, because there is probably significant overlap in people's friends lists.


this data was voluntarily given to them by users

It was given to them by the 1 million users that authorized the app. Not the other 189 million (or more) users.

Also, that 190 million figure is probably inaccurate, because there is probably significant overlap in people's friends lists.

According to their own campaign manager, they "sucked out the whole of the social graph" with Facebook's blessing. So they may have actually had more than 190 million, since there are more US users than that on Facebook.


> I don't know, they should have done it when Obama committed even worse Facebook privacy violations back in 2008/2012.

Citation needed. Everything I've seen indicates Obama's campaign didn't violate the rules: https://twitter.com/mbsimon/status/975231597183229953

"I ran the Obama 2008 data-driven microtargeting team. How dare you! We didn’t steal private Facebook profile data from voters under false pretenses. OFA voluntarily solicited opinions of hundreds of thousands of voters. We didn’t commit theft to do our groundbreaking work."


Citation needed.

Sure... coming right up! How about a statement from Obama's former campaign director saying that they "sucked out the whole of the social graph" with Facebook's blessing?

https://twitter.com/cld276/status/975568130117459975


...

That doesn't prove what you are saying. "sucked out the whole of the social graph" is a non-technical statement that doesn't have an explicit meaning, nor is it specified what user data they were scraping.


Look later in the thread. The estimate is that Obama took data from about 190 million profiles, about 189 million of which gave his campaign no explicit access to their data, and about half of whom would have explicitly objected to Obama’s use of their data had they known about it.

You might think “well that’s egregious, but the rules at the time allowed it”. Unfortunately, even that isn’t true. Obama’s app did not play by the rules. The Facebook developer platform rules at the time stated, essentially, that you could not use the data you gained access to through your apps for any purpose outside of the operation of your apps. In other words, you weren’t allowed to create an app that claims it just sends a message to your friends about how wonderful Obama is, and then take the friend data you gain access to through that app and use it for the targeting of political advertising and/or campaign strategy, which is by all accounts precisely what they did. So they broke the same rules that Kogan did, just on a much larger scale, and with Facebook’s tacit approval.


Pulling user data (and friend data) from the API was "fair game" pre-2014. It was an explicit permission you could ask for.

Gleaning insights from that data and then using them for targeting on FB is/was the point.

Creating an app that collected the data under false pretenses, transferring that data to another 3rd party, and then claiming you never had that data makes it a different situation.


Pulling the data that your app needed for its stated functionality, exclusively for use within your app, was “fair game”. Pulling anything beyond that, and/or using it outside your app, was absolutely not allowed, ever, on the platform, even at the very beginning in 2007. Allowing it to be analyzed for targeting was never allowed, nor was it the point of the platform. You’re simply wrong.

How do I know this? I developed apps on the platform from 2007-2014 and was always reading the rules and any changes they issued because I didn’t want to violate them and have my app banned. They were exceedingly clear on this issue. Sadly, Obama’s app was allowed to violate these rules and received no ban.


I'd have to dig through the TOS again to confirm my memory on some of this stuff. You definitely could create a custom audience and target your fb app users based on UID.

Unless I'm reading this interface totally wrong

http://www.jonloomer.com/wp-content/uploads/2014/09/create-f...


You used to be able to target ads based on user ID. That option was closed off a few years ago - I was actually awarded a $2,000 Facebook Bug Bounty for spotting a vulnerability that allowed that option even after they shut it off.

But regardless, it was still a developer TOS violation to export user IDs of people that hadn’t personally authorized your app and use those IDs in any custom audience even back when Obama did it. In other words, you weren’t supposed to grab user IDs of friends of your users and use those in custom audiences. In fact at one point, in an attempt to enforce this policy, Facebook stopped returning friend user IDs, and instead gave proxy user IDs that were meaningful only within the API, but couldn’t be used for custom audience targeting. Then they got rid of the target by ID option altogether.


That was different from what they did in 2012 - which was a Facebook hoover.


> I don't know, they should have done it when Obama committed even worse Facebook privacy violations back in 2008/2012.

I know a lot of folks have been desperate to make a false equivalence with the Obama campaign's social media use, but it doesn't pass the sniff test:

https://washingtonmonthly.com/2018/03/21/no-obama-didnt-empl...


I hope you're kidding. Not sure what the rule on repeating myself is but since this is currently the top comment in this chain, I'll post it here anyway:

https://www.investors.com/politics/editorials/facebook-data-....

"Facebook was surprised we were able to suck out the whole social graph, but they didn't stop us once they realized that was what we were doing." - Carol Davidsen, Former Obama Campaign Director

They used the exact same technique to access data from 190 million profiles - and about 189 million of those were without explicit authorization. Exactly the same technique, just with 4X the reach that CA had.


I'm not sure why you keep on repeating that line, it's just a statement of fact. There were hundreds (if not thousands) of FB apps from that time period that pulled comparable social graphs before FB switched to the v2 Graph API in 2015. What CA did beyond this is that (1) they collected their political data under the guise of a personality test, without any indication to users what their data would be used for, and (2) the data was used for political microtargeting (in violation of FB's TOS) instead of just sending voluntary messages to friends.

If you know of any evidence that Obama's campaign broke either the law or FB's ToS, please let me know.


There were hundreds (if not thousands) of FB apps from that time period that pulled comparable social graphs before FB switched to the v2 Graph API

That is simply not true. Facebook rate-limited apps for exactly this reason. Additionally, although it was technically possible, the TOS did not allow mass-scraping of friend data for any purpose, much less political purposes. Facebook monitored and routinely banned apps long before they ever accessed the data of even a few million friend profiles, let alone 190 million of them. That's why, according to Obama's own campaign manager, Facebook was "surprised" but then decided not to ban them - which they would have done to any other app.

the data was used for political microtargeting (in violation of FB's TOS)

Which is precisely what Obama did.

If you know of any evidence that Obama's campaign broke either the law or FB's ToS, please let me know.

See above.


I think the point neuronexmachina was trying to make about microtargeting is that Obama simply asked their users to send messages to their friends, whereas CA directly advertised to people.


The Obama campaign used that data in every way that CA did. The fact that the app only represented that it was sending messages to friends is just as misleading as what Kogan (not CA, who never had an app) did. They used this data that 99.5% of the people didn't authorize them to have for targeting, campaign strategy, etc.


Again, collecting data about friends while being upfront about the purpose is very different from collecting friend data under completely false pretenses, selling the data (Kogan), and then later lying to Facebook that it was deleted when it was not (CA).


You keep stating the same allegations and assumptions, and now including statistics, without sources.


But this then suggests that we should look far more closely at the Hillary campaign and see why it didn’t win.

Was it bad management ?

Was it not using Facebook data effectively ?

Was CA that pivotal? Was it fake news or better microtargeting?

If both sides used it why did one side lose?


What kind of safeguards would you introduce?

At the very start of the Facebook platform, the API would anonymize the user's email address; the app would get an app-specific hash, e.g. abcdefg-app123456@facebook.net. It was a working email address, with Facebook handling the forwarding.

This proved futile as the very first thing that apps then did was to ask users for their real email address.
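
For what it's worth, the forwarding mechanism itself is easy to sketch. Here's a toy illustration in TypeScript (not Facebook's actual implementation; the HMAC scheme and all names are assumptions):

  import { createHmac } from "crypto";

  const PLATFORM_SECRET = "server-side-key";         // assumption: key held only by the platform
  const forwardingTable = new Map<string, string>(); // alias -> real address

  // Derive a stable, app-specific alias so the app never sees the real
  // address, while the platform can still route mail sent to the alias.
  function aliasFor(realEmail: string, appId: string): string {
    const hash = createHmac("sha256", PLATFORM_SECRET)
      .update(`${realEmail}:${appId}`)
      .digest("hex")
      .slice(0, 10);
    const alias = `${hash}-app${appId}@facebook.net`;
    forwardingTable.set(alias, realEmail);
    return alias;
  }

  console.log(aliasFor("jane@example.com", "123456"));

A nice side effect of per-app aliases is that a leaked address list can be traced back to the app that leaked it. But the weakness was never technical: apps simply asked users for the real address, as noted above.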


> What kind of safeguards would you introduce?

How about the ones they agreed to with the FTC in 2011 [1][2]?

[1] https://www.ftc.gov/sites/default/files/documents/cases/2011...

[2] https://www.ftc.gov/news-events/press-releases/2011/11/faceb...


To really fix this, Facebook will have to stop allowing 3rd party developers direct access to user data.

Basically, FB should introduce an App Engine-like platform where the backend of any 3rd-party application that uses FB data has to run on FB-owned servers. Developers of these applications would then ship their code to FB (similar to Heroku) and run it in a sandboxed environment where they are not allowed to take data out at all.

That way FB can audit how the data is being used at any time and kick out people who are out of compliance with their terms. If a user deletes their FB account, all their data could then be deleted from any 3rd-party applications automatically. This is basically similar to the way the government handles classified data.
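
To make that concrete, here's a minimal sketch (all names hypothetical) of what the contract between FB and a hosted app could look like: third-party code is a handler that receives the authorized viewer's data and returns markup, and the runtime hands it no network or storage capability:

  interface ViewerData {
    id: string;
    name: string;
    friends: string[];
  }

  type AppHandler = (viewer: ViewerData) => string; // HTML out, nothing else

  // Platform-owned runner; a real system would execute the handler in an
  // isolate with no I/O, making exfiltration structurally impossible
  // rather than merely against the rules.
  function runSandboxed(handler: AppHandler, viewer: ViewerData): string {
    return handler(viewer);
  }

  const quizApp: AppHandler = (viewer) =>
    `<p>Hi ${viewer.name}, ${viewer.friends.length} friends can see your score.</p>`;

  console.log(runSandboxed(quizApp, { id: "1", name: "Ada", friends: ["2", "3"] }));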


If it were some computation, aggregation, or analysis, this would work, but a lot of (I'd guess most) applications might not fall into this category. How are you going to present the data in a UI to users if no data is supposed to leave the server?


The UI could be a web page, mobile app, etc., just like FB.com. But to see any user data you would actually have to be authorized as that specific user. I.e., a developer could test their app by logging in with their own FB account and using their own data, but would have no capability to look at the raw DB entries of other users.

Or perhaps there could be a limited capability for developers to log in to their app as another user of it, but those accesses would be logged and periodically audited (just like the log-in-as-another-user actions of regular FB engineers are).
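
A sketch of that access rule (hypothetical names, obviously not FB's real schema):

  interface Session {
    userId: string;
  }

  const userStore = new Map<string, object>(); // platform-held user data

  // The hosted backend returns only the row owned by the session's user;
  // a developer logged in as themselves can never read other users' raw
  // entries, and any override path would be logged for audit.
  function getUserData(session: Session, requestedUserId: string): object | undefined {
    if (session.userId !== requestedUserId) {
      throw new Error("Forbidden: apps may only read the authorized user's own data");
    }
    return userStore.get(requestedUserId);
  }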


Hm, the catch here is something like this could happen:

  +--------------+        +----------+        +--------------+
  |              |        |          |        |              |
  | other server |        |  Server  |        | other server |
  |              |        |          |        |              |
  +-------+------+        +----+-----+        +-------+------+
          ^                    |                      ^
          |                    |                      |
          |               http request                |
          |               containing user             | 
          |               data for UI                 |
    http request made          |               rogue http request
    with a legit purpose       |               sending the user data
    say fetching assets        v               to some other server
          |            +-------+-------+              |
          |            |               |              |
          +------------+  mobile app   +--------------+
                       |               |
                       +---------------+

And there is really no way of knowing which requests are for legit app purposes and which would be leaking data. Now, you could require that the mobile app make no requests to any service but Facebook, which brings two questions:

1. Can Facebook do this? Apple could; they approve apps and get the binary/manifest before the app is released. But how could Facebook enforce this?

2. Would developers be OK with this? Relying 100% on Facebook?

It is an option, I'm not saying it's infeasible. An interesting idea for sure, thanks for sharing!


I guess you could let the mobile app access third-party servers but require all traffic to go through a proxy where Facebook can examine it. (Either by letting Facebook handle all the encryption or giving Facebook a copy of the key.)

All of this tends to trade one problem for another, though: the user no longer has to worry that the third-party app has access to their Facebook data. But now the user has to worry that Facebook has access to their third-party app data.


What tool did you use to make that diagram?



yup, that's the one.


1) Agreed - for mobile apps to maintain some semblance of data sandboxing, this would require FB to work with Apple. I'm imagining some kind of new iOS "Secure API" where the return values from certain API calls are marked "tainted", and Apple then uses static binary analysis to reject apps that write the tainted data to non-whitelisted socket calls.

2) Developers simply wouldn't have a choice in the matter - FB is so big that they can force the market in a certain direction if they want to.


Bits can’t be colored... http://ansuz.sooke.bc.ca/entry/23


> But to see any user data you would actually have to be authorized as that specific user. I.E. a developer could test their app by logging in with their own FB account and using their own data but would have no capability to look at the raw DB entries of other users.

This is the status quo as of the 2014 platform policy changes.

The entire debacle is around the data retrieved (and saved) prior to that, which is what Cambridge Analytica did.

> Or perhaps their could be a limited capability for developers to log in to their app as another user of it

FB Developer app has support for test users, who are marked as such.


> the backend of any 3rd-party application that uses FB data has to run on FB-owned servers

Would that work for mobile apps?


The backend of the mobile app would run on FB servers.


Most apps that do "Login with Facebook" also support "Login with Google" and sometimes "Login with LinkedIn". The backend ownership issue becomes a bit more complicated.

This is even before we get into trust issues most developers would have with Facebook having access to their user data.


The portion of the app that deals with FB data would be hosted at FB, the portion that deals with Google would be hosted at Google, etc. And this is only a problem for applications that pull user data from FB itself; if an app was merely using FB for authorization but wasn't actually using any FB profile data, photos, friend lists, etc., then it could all be hosted on a 3rd-party server.


So most developers would say: "But my FB backend is just a simple proxy/transformation layer that parses the data and sends it to the front end."

Most would be correct - some flows like inviting friends to an app, or showing friends who also happen to use this app (for games, etc.) are so common that they're integrated into the Facebook SDK.

For performance reasons (both on Facebook's end and for the sake of the user's experience, in case their phone moved into a zone with inferior coverage), a fat network request was preferred to a multitude of thin ones - you couldn't really predict when the user would fancy inviting their friends, saving their score, looking at their achievements, or checking out friends who are in the same town as them - so a 24-hour cache policy was instituted.


They could aggressively pursue enforcement and punitive measures. It's not about preventing everything that could possibly go awry, but making it well-known in the developer community that Facebook/Platform has no problem shutting you down if you egregiously break the rules.


I think that's sorta kinda what was happening with

> we immediately banned Kogan's app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications.

To generalize the issue, if you were in charge of APIs at some company A and some company B (not necessarily located in your jurisdiction and not necessarily subject to the same legislative framework as you) told you they used to have the data, but deleted it since then, what additional measures would you recommend company A pursue?


In that situation, I guess there's not much power Company A has with regard to Company B, right? Unless Company B is worried about a future relationship and is thus willing to submit to some kind of audit.

Zuck's announcement seems to have a decent outline - start by investigating apps with access to large amounts of data, and audit apps with strange behavior. But after a year, once the CA (and other) controversies are forgotten, what I was thinking is that FB should be doing regular, random audits/investigations, and publicizing the punishments.

I don't mean identifiable shaming, e.g. "Last week, we banned Jane Smith and her Flappy Farm app for misusing the data of 3,000+ users". But maybe weekly/monthly tallies of apps that were shut down or sanctioned, with a breakdown of the reasons why, users affected, etc. Every once in a while, an app maker might post a "We Fucked Up" article on HN, which helps even more in reminding people of the TOS.


From what I've seen, tech companies tend to be understaffed on enforcement and compliance, relative to their user reach. Was the "certification" anything more than a checkbox in an emailed Word doc? I expect that's one area where they could improve their diligence. I would argue that if you have a high volume advertiser who has already broken a TOS and continues to use the platform, that type of hands-off certification is deliberately negligent. Anyway, we'll find out more details from their FB account manager in the coming months.


Then make "no requests for users email" part of the terms of using the API?


Sure, and maybe they'll put it right next to the "you're not allowed to save any data" clause.


One is much easier to inspect and verify than the other.

There's really no way to check if someone has made a copy of data if you've given them that data.

It's not that hard to check if an application prompts a user for an email address.


One is fairly trivial to prove compared to the other.


But there are many legitimate use cases of sharing email addresses. Just scratching the surface,

* someone used Facebook Connect to login to NYTimes Web site, curious about their Food & Recipes newsletter, wants to sign up

* someone logging into e-commerce shop through Facebook Connect, making a purchase and then deciding that yes, they would like to track and manage their order on the retailer's Web site, and they will even sign up for an account with retailer to do that


Neither one of those use cases would necessitate using the user's real email address instead of the forwarding address that Facebook provided.


Which is fine. But most apps that I remember of this type (back when I used Facebook apps) would ask you for it on landing. Nothing ever has to be black-and-white.


Which they flagrantly violated...


How about allowing pseudonyms for a start, like the original Internet before FB corrupted it?


"What kind of safeguards would you introduce?"

I think the only sane response would be to shut down the developer program. I doubt it contributes much to the FB bottom line, and it's clearly something FB doesn't care much about, given the breadth of this scandal.


That's true now. A decade ago, being a development platform was their main goal.


The 2011 FTC hearings were about this exact same topic: you can't trust 3rd-party app developers. Back then it was social game developers selling user profile data to Rapleaf and other data brokers.


> That's the only hole in his statement.

What about the punishment of Christopher Wylie, with the closing/suspending of his accounts on Facebook, WhatsApp, etc.?


> Last week, we learned from The Guardian, The New York Times and Channel 4 that Cambridge Analytica may not have deleted the data as they had certified. We immediately banned them from using any of our services.

He was part of Cambridge Analytica at the time. So they suspended his account along with the rest of them I suppose.


> What about the punishment to Christopher Wylie, by closing/suspending his accounts in facebook, whatsapp, etc ?

The Christopher Wylie that, by his own account, was a knowing, active, and key participant in the things they punished CA for, and in fact claims to be the one who came up with the concept for it?

What about it?


It's to Facebook's benefit for advertisers to gather all that data, because the only way they can actually use it to make money is by advertising to the users on Facebook. I find it incredibly hard to believe that this thought never crossed anybody's mind.


Have you ever looked at FB's ad platform? You don't download everyone's data and target the campaign yourself. You target "18-25 males in these zip codes who like the Yankees". I don't see how you go from that platform (hosted and controlled by Facebook) to something else.
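
For reference, a campaign's targeting is expressed declaratively, something like the TypeScript object below. The field names loosely follow Facebook's Marketing API targeting spec, but treat the exact shape and IDs as illustrative, not authoritative:

  const targetingSpec = {
    age_min: 18,
    age_max: 25,
    genders: [1], // 1 = male in FB's convention
    geo_locations: { zips: [{ key: "US:10001" }, { key: "US:10002" }] },
    interests: [{ id: "INTEREST_ID_PLACEHOLDER", name: "New York Yankees" }],
  };

  // The advertiser submits the spec; Facebook resolves the audience
  // server-side and reports back only aggregate reach, never a user list.
  console.log(JSON.stringify(targetingSpec, null, 2));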


At least at one point, you could. See the example of the person who pranked his roommate through very specifically targeted Facebook ads. [1]

It's even mentioned in that article how to get around the fix they put in so you couldn't target a group of less than 20 people.

I'm not sure if this particular method still works, but let's take a step back and think before we make claims about what is and is not possible in a complex system with lots of features and knobs to twiddle. Whether this was an emergent property of other features or a specifically desired and designed behavior, at least at one point Facebook allowed very fine-grained targeting.

1: http://ghostinfluence.com/the-ultimate-retaliation-pranking-...


That still doesn't really sound like the same thing. Highly specific targeting based on PII you already have is a very different prospect than harvesting PII for tens of millions of strangers.


I concede you might be able to target a single person, but how exactly is that stealing data? It would take me a really long time to harvest all that juicy data one ad at a time.


There were black-hat scrapers in 2013-ish that enabled affiliate marketers to enter a FB page or public group, and export all the commenters' and likers' names, handles, numeric IDs, locations, emails, ages and a few other things to a CSV.

I assume their scraper was more in-depth. If you had friend-type permissions back then you could perhaps see their posts and shares. You'd need post and share content to run text analysis, figure out their hot buttons based on language and frequency, sort them into groups and then target them.

Affiliate marketers could also upload lists of unique user IDs (like the FB group members scraped above) for specific ad campaigns, y'know, disguised hookup sites, skin-cream credit-card rebill offers, etc.

Back then, many of the privacy options were deeply hidden, obscure, changed names a couple times a year or were unavailable.


I wasn't entirely clear. I was really just responding to the point that you couldn't target people yourself. It wasn't meant as a refutation to your point, just a clarification of one aspect of your point.

That said, since you can target specifically, if the system did allow some way to exfiltrate user data, it would make financial sense for larger analytics companies to do so to provide value-added services where they could target much more specifically than Facebook intended (or at least intended to make obvious?).


The apps being able to siphon tens of millions of users' data makes possible what you're saying you can't do: you use an app to get everyone's data, then you analyse that and work out the groups you want to target and with what. Then you realise that the best way to deliver those ads, given your data is Facebook profiles and all your targeting categories are from Facebook, is with Facebook ads.


> They knew it was illegal but put all the incentives for companies not to follow the rules.

Off topic, but I can't help it. You get what you measure. Which is why economies that only measure profit optimize for nothing but profit. When a nation-state says "It's illegal to do X" but has mandatory accounting practices that do not measure X but only measure profit, we should not be surprised that companies like Facebook do awful things. The GAAP has all but ensured that this happens. You want companies to have values? Then measure values! (alongside profit, not in spite of it) https://en.wikipedia.org/wiki/Generally_Accepted_Accounting_...


> When a nation-state says "It's illegal to do X" but has mandatory accounting practices that do not measure X but only measure profit

We don't need accountants to measure legality. That's what we have law enforcement and courts for. Investors care about profits; behaving illegally should hurt profits. Deputising a multi-billion dollar company's thousands of shareholders as its moral police is an absurd proposal.


Law enforcement and courts aren't funded in a way that makes that effective.


> Law enforcement and courts aren't funded in a way that makes that effective

In the 1930s, Congress realized that financial crimes were (a) prevalent, (b) serious and (c) difficult to investigate and prosecute. So it created the SEC [1]. Its specialists, with the budget, focus and mandate to pursue securities-related violations, have been effective (relative to pre-1930s finance).

Regulators make rules. They also enforce them. We have no top cop for technology. The costs of that gap are becoming apparent.


Average US citizens have no comprehension of 1930s America, FDR, etc. but the same people will instinctually glorify the 1950s as a time when traditional values led to economic prosperity.


> the GAAP has all but ensured that this happens. You want companies to have values? Then measure values! (alongside profit, not in spite of it)

How do you record company/moral values in the Books of Accounts? How does one audit those morals? What category do you assign them: Asset, Liability or Owner's Equity? And what happens if morals change and some are deemed obsolete? Also, if you start to measure company values, it becomes obvious that other things kept out of the purview of the Books would want equal footing too, like legal contracts with clients, employee agreements, court cases, and so on.

There is a very good reason why accounting standards (like GAAP or IFRS) decided to only record transactions in the Books and not be concerned with legalities or moralities. It's impossible to assign an amount to a company/moral value, as what is valuable to me as a shareholder may not be considered valuable to, say, the local tax authority or a would-be investor, and vice versa.


Yes, we all know that. To even start thinking about how to quantify values — and if you follow macro like I do, you know a modern economist can probably quantify anything — first we must inject values into the conversation. Right now, the world doesn't have any values. Except profit, competition, winning, domination and control. It's not even efficient — it's a prisoner's dilemma stuck at backstab/backstab. How do you solve a prisoner's dilemma? ... shared values! ("thou shalt not kill")


Yes, you are right that from an economic standpoint you will be able to quantify anything. However, GAAP/IFRS was specifically created to handle only a subset of that whole gamut, viz. recordation in the Books of Accounts and generation of the Balance Sheet, Income Statement and Cash Flow statement. There are commissions in place to handle fraud and deceptive practices for things that cannot be codified into the Books of Accounts. For instance, the anti-monopoly acts enacted in various countries have safeguards that prevent a single entity from monopolising the market. However, if you try to codify this into the Books you are going to have a tough time when the law changes. You can't go back in time and retrospectively change your ledger transactions (which are static) to reflect changes in law (which is dynamic) or values/morals (which are also dynamic).

Please note that I am only replying in the context of GAAP (which you mentioned). Otherwise, I agree with all the ideas you presented and only disagree on the one aspect of adding them to the GAAP/IFRS standards, as the purpose of their creation was precisely to steer away from unknowns and record only the knowns.


The parent comment said "Then measure values!", but didn't specify how. That was left as an exercise for the reader to imagine.

Mashing value metrics into accounting practices seems problematic at best.

On the whole I think it's a tricky subject because we want to be careful not to stifle innovation, and sometimes problems only become evident after quite a few pavers have been laid on the road of good intentions.


Aren't intangibles exactly that, though perhaps a little broader than specifying particular moral behaviours?

Goodwill, Brand (which contains value statements), IP, etc, they're given financial value. Perhaps these don't influence shareholder/public actions as much as they could, but the measures are there at least.


> Aren't intangibles exactly that, though perhaps a little broader than specifying particular moral behaviours?

Intangibles can be bought and sold, as they aren't attached to anything emotional. Moral behaviours/values, if embedded in the Books, would have to enter through a transaction. How do you transact moral behaviour/values? That is the question I have.

> Goodwill, Brand (which contains value statements), IP, etc, they're given financial value. Perhaps these don't influence shareholder/public actions as much as they could, but the measures are there at least.

Accounting principles state that for anything to be recorded in the Books a prior transaction should exist. Brand and Goodwill, by accounting principles, are only recorded after they are transacted the first time. What that means is: say you start an enterprise. The enterprise over its lifetime acquires a Brand value. However, you cannot record that Brand value until the enterprise is sold to another entity. Only in that scenario can the buying entity record it in its Books as an Asset.

EDIT: IP, Copyright or Patents on the other hand can be recorded as Intangibles because you "bought" it from an issuing entity (the Government or any other body which is issuing you the certifications in exchange for a monetary value). Hence a transaction exists prior to the recordation in the Books which has valued the asset.

EDIT: To explain better: the reason you cannot record a Brand value in the Books until it's either sold/acquired is primarily because there is no way to gauge the value of Brand/Goodwill. I may consider my enterprise's Brand value to be a million dollars. But you might consider it to have no value. Unless a transaction occurs, a value cannot be arrived at, as there is no inherent value. Hence the recordation in the Books happens only after a transaction takes place.


This is a good post. 1/ I am not sure it matters that social good is hard to value. Even a sloppy metric, even just a boolean – e.g. environment-neutral vs. negative – serves the purpose, because by simply existing, people can point at it, tweet about it, start to ask questions, apply peer pressure. A couple of good catchphrases are enough to get elected. 2/ I think we probably will figure out how to value things like this in the future. A friend of mine suggested that applying options-pricing theory might be interesting: air pollution doesn't have a value today, but it could have a very high value someday in a future worst-case scenario. But that's all above my head, and I see that you are much better versed in accounting than me.

(I see now our replies crossed so will leave this and stop posting :)


Thanks for the explanations. On the transaction front, what about share trading activity based on Value statements or actions that impact Brand/Goodwill etc? Though the sale of shares is a future transaction, a company could reasonably conclude something like:

We will make X and Y the value statements of our company/brand. If our actions reduce the value of X and Y statements, what will that cost us? This at least would provide a 'rough' starting value, to be reviewed/measured against market response. Do companies already do this? I'd think it important for service/advertising based companies (but I am guessing, I have only couch-potato knowledge).


Here is one attempt at using environmental, social and governance (ESG) integration factors to create an index of companies that are measured beyond just the basic plain-vanilla profitability/market-cap metrics:

https://www.msci.com/esg-ratings

https://www.msci.com/research/esg-research

I recommend watching the following TED video as an introduction to the topic: https://www.ted.com/watch/ted-institute/ted-state-street/aud...


Slightly offtopic too, but I would love to hear suggestions for good books on economics and/or philosophy that would discuss non-monetary profit and values.

https://www.theguardian.com/sustainable-business/2014/oct/01...


An alternative approach I’ve long been fond of is to lengthen time horizons. In the long run, bad behavior is much more likely to come back and bite you. If you can get incentive structures on a longer term time horizon, you’re much less likely to engage in risky behavior that might have short term payoffs but sinks you in the long run.


Here's an interesting paper by some Columbia grad students that I've been meaning to read: http://sustainability.ei.columbia.edu/files/2014/07/Navigati...

Also check out the MSCI links in my other comments to this thread if you have a chance.


One of the key points of the Z post is that for an app to be able to request permissions from users, the app creator will need to sign a contract and be subject to an audit.

This appears to solve the issue of having wide permissions, but it does not do so. In reality, this is an attempt at transferring Facebook's risk to shady app developers, while the overall lifecycle for the app won't change.

In essence, this is a do-nothing from the standpoint of app developers who have requested additional permissions. Any app developer who is told they need to undergo an audit for having copied out large parts of the social graph can simply say no and get their account banned. It will likely have no effect, as the account will almost certainly have been suspended already by that point.


WRT FB's knowledge of this happening: your assuming bad intent is no worse than FB's assuming good intent. Otherwise, one of the more reasonable FB comments on HN.


Why can't they hold the data inside Facebook and have developers connect over a VPN to a VM or remote desktop (always recorded), then work on the data with analytical tools installed there, with no internet access, in a DMZ? That way they could record everything the developer does with the data, and the worst he could do is take screenshots. No data would ever leave their hands.
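A lighter-weight variant of the same idea is an aggregate-only query gateway: developers never see row-level data at all, just statistics over cohorts above a minimum size, with every call logged. A minimal sketch (hypothetical, not anything FB shipped):

    # Hypothetical clean-room gateway: row-level data stays inside,
    # developers get audited, thresholded aggregates only.
    import logging
    from statistics import mean

    MIN_COHORT = 50
    audit = logging.getLogger("cleanroom")

    def mean_age(users, predicate):
        cohort = [u for u in users if predicate(u)]
        audit.info("mean_age query matched %d users", len(cohort))  # audit trail
        if len(cohort) < MIN_COHORT:
            return None  # refuse answers that could single someone out
        return mean(u["age"] for u in cohort)

Thresholding alone doesn't stop differencing attacks (query A, then A minus one person), but it's the obvious first rung on the ladder toward the VM/DMZ setup you describe.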


Yeah, it's like "oh gosh, they violated our terms of service by pulling 50 million users' info! We must send them a sternly worded email with a checkbox to confirm that they won't do it again!". It's inconceivable that there haven't been larger exfiltrations of user data - how would they even know?


I feel like they really doubled down on that by naming Kogan.... "see, it was just one bad developer"


I'm guessing teenaged Mark always left a porn site saying "aw shucks, I'm not 18 yet, it tells me here I can't enter.".


So are you cheering for more regulations?


Facebook did not intend for this to happen. That is such nonsense.

They intended for cool apps to go viral across their social graph so Facebook could be a “social utility” and the operating system of human relationships and other airy fantasies they spouted in 2012, 2013, 2014 when they built the app platform.

Their hopes of a beautiful future of joy and freedom were dashed when they discovered humans are capable of garbage behaviour.

Ironically they believed the walled garden of Facebook would clean up the cesspool of blog comments. Oops.


>The trusting developers not to sell any data but putting zero safeguards in place to prevent this and extremely punitive repercussions despite being repeatedly told by the public, media, and even high level employees tells me Facebook can't plead ignorance to this and they not only knew this was happening, but they probably intended for it to happen.

Why would they intend for it to happen? They didn't make any money off of this exfiltration; CA paid people via Mechanical Turk to install the application so that they could mine their data. Facebook didn't get a dime. In fact, they have a monetary interest in preventing this, because their data is worth something and these guys got it through free usage of the API. So the insinuation that Facebook wanted this to happen, or looked away because it benefited them, makes zero sense.

What kind of safeguards are you imagining? How do you have 3rd parties interface with Facebook without letting those applications reason about the information within a Facebook account? Tinder is valued at over a billion dollars and it's not possible to use it without a Facebook account-- should Facebook shut that down and ban the entire concept of 3rd party Facebook interaction?

I do not understand the anger. They did nothing wrong. I can't believe that people are legitimately arguing that users shouldn't have a right to expose their information to apps.


I think it is ridiculous to believe that Facebook shares data with 3rd parties out of charity. If you provide more value via data to integrating applications than any of your competitors, you will continue to have the largest share of the market. So yes, there is a profit incentive.


Many people find various apps on Facebook to be useful. That makes Facebook a better platform, which makes Facebook more valuable. Of course there's profit that comes with making their product better. Heaven forbid!


They sold the data by proxy, by knowingly letting 3rd parties siphon it. Why else would they be at CA when the cops showed up? Are you being intentionally obtuse?


They let third parties access data but where are you getting the idea that they “sold the data by proxy”? There is no evidence they wanted the data to be stolen, or that they willingly allowed it.


They were at CA because it's a multibillion dollar company and that means that they have people who keep tabs on stories that result in millions of dollars of lost stock value. It's not because they were in cahoots with CA, they were covering their bases.

Are you honestly suggesting that CA wrote Facebook a check?


If they're serious about this, and I'm leaning toward they aren't really serious about it, this will affect a lot of startups, specifically in advertising, but probably a lot of mobile apps as well:

“First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps.“

Might be a too-little, too-late attempt at self-regulation, since they've apparently known about this Cambridge Analytica situation since before the election, and since then there have been Facebook employees embedded with CA to help them target ads.

MSNBC is already playing this as: 1) starts with a denial, 2) admits wrongdoing, 3) claims behavior will change, 4) changes are not carried out and we're in the same place a year down the road.

They’re also pointing out that Zuck is fine meeting with Xi Jinping, meeting with Medvedev but refuses to appear before the US Congress and sends the company counsel instead.


I worked on an app eight years ago for a woman who went to Harvard with Zuck. I only mention that because she did. Every. Day. Eventually they had me do a Facebook integration, and when I saw how easy it was to escalate the permissions I was horrified. I was an inexperienced developer at the time and wasn't calling the shots. We saved everything. The lead devs laughed about the policy that straight up said it was our responsibility to delete the data after use. That would be inconvenient! The system was designed for this. There is no way for them to do accounting on this issue. This blog is a farce. The company and project don't exist anymore. I'd bet money that more than a few people have that tarball.


> They’re also pointing out that Zuck is fine meeting with Xi Jinping, meeting with Medvedev but refuses to appear before the US Congress and sends the company counsel instead.

Worth noting that Zuck wants something from Xi and Medvedev, but already has everything he needs in terms of support from the US gov.


Facebook's real existential threat is just that. He thinks that having everything he wants now will carry into the future. He should call his bud Billy Gates and see how not playing 5 moves ahead with the government worked out.

Facebook has almost ensured that it becomes the whipping post of the FAANG companies as the government looks to get tough on tech. I'm genuinely not sure what Zuck can do, as long as Apple, Google and Amazon don't make mistakes in the same way.

The next 10 years are going to be a lot like the old adage about being chased out of a campsite by a bear. It's unimportant to be the fastest (best) of the group; you just can't be the slowest.


>He should call his bud Billy Gates and see how not playing 5 moves ahead with the government worked out.

It seems that it worked out for Bill Gates pretty well.


Yea, the Microsoft anti-trust years would love to have a sit down with you.


There's a lot of innuendo that he was forced out after the second antitrust case (there was an initial slap on the wrist, which he flouted).


I'm leaning toward they aren't really serious about it

I've said variations on this before: https://news.ycombinator.com/item?id=16438362, but FB will get serious about it when users stop using it.

Until that time, everyone can complain, but the concept of "revealed preferences" is relevant. Do people actually care? If so, they'll change their behavior, FB will likely notice, and changes will happen.

People have been complaining about FB and privacy since practically day 0. Throughout that entire period, FB has only become more popular.


"They’re also pointing out that Zuck is fine meeting with Xi Jinping, meeting with Medvedev but refuses to appear before the US Congress and sends the company counsel instead."

I get the feeling, but you'll agree that a legally binding procedure posing an actual existential threat to your hundred-billion-dollar company that employs 25k people requires a different approach than a seduction meeting with a despot to try to loosen regulations.


> but refuses to appear before the US Congress and sends the company counsel instead.

This strikes me as especially interesting. I mean, I'd personally, theoretically, have my reservations about this congress over and above the average congress, but FaceMark refusing seems like a deeply telling datum about how he's thinking about this.


Basically no CEOs ever want to testify in front of a congressional committee. There is only downside to such a situation for the company. There is zero upside.

This is not a deeply telling datum. It is a boring and standard one.


While that may be true for regulation/ethics-related appearances, it is not true for things such as funding. For example, both Elon Musk and Tory Bruno have gone before Congress multiple times to discuss private space programs/contracts and to justify their cases. Admittedly, Elon did send Gwynne Shotwell several times, but she is still the COO of the company.

More accurately, there is risk in talking to the US Congress when the topic sounds more like an inquisition than when Congress is asking for opinions.

The matter then is why is it a risk for Facebook to discuss the CA issue? Are they worried about a witch hunt or a public ethics execution?


In this case there is user trust to be gained. By refusing to appear in front of Congress, Mark and Facebook have lost a little of my trust.


Did they have any before?


Testifying in front of Congress is just an opportunity for politicians to win points by kicking you around like a ball. Very few people (let alone CEOs) stand to gain anything from it, and corporate counsel is paid to put up with abuse.

> ...my reservations about this congress over and above the average congress...

ugh, come on. it's not like the whole congress votes on how you're to be treated, and the questions you'll be asked. the biggest heels on both sides of the aisle are free to harangue you all they want.


> it's not like the whole congress votes on how you're to be treated

Not the whole Congress, but how exactly do you think procedure and parameters for hearings are set and objections during hearings are resolved? Both the specific personalities in leadership positions and the attitudes of the majority matter a lot.

EDIT: It's true that there is a tradition of providing some semblance of balance in committee process, including hearings, with the majority (not necessarily by party) mainly controlling what items are considered and what hearings are held, and ultimately the outcome; but not only is partisanship greater than in the past, but precisely those traditions have noticeably weakened over the last couple decades and particularly in the current Congress.


> are free to harangue you all they want

Any exec hauled on the carpet is going to want a drink afterwards. That's not the concern.


Anyone else remember Beacon? This is how FB was designed to work from the beginning. They've just been toying with PR ways to say it to make people accept it without thinking.

The election woke people up.

I found this from 2011 when they shut it down as a "mistake." (https://newsroom.fb.com/news/2011/11/our-commitment-to-the-f...).

Unfortunately, it looks like they removed the launch release - but it would be interesting to see how it was presented in light of the recent news.

They're not a product company, they're a distraction company.


"They're not a product company, they're a distraction company."

Correction: they are a surveillance company. Need I remind everyone of Google and FB's In-Q-Tel CIA partners in crime? They just figured out a way for everyone to willingly report on themselves - and not just on themselves, on others too! FB is bad and should collapse like every other dotcom boom-bust that uses and abuses its users... but the difference is a level of monopoly on non-technical users that didn't exist in the 90s. Back then the technical community could have dropped a product like hotcakes and watched it bust... but with the increase of non-technical users who sign any EULA/TOS and don't give a crap about privacy... I expect nothing will happen until something really bad happens at a massive scale.

For those who are younger, consider this the Slashdot/Digg/Reddit cycle. Reddit will be next to die as it gets closer to its IPO, too.

It's the beauty of computing though. Every market is ripe for disruption if someone has a good vision and follow-through. The problem is that so many of them use the exact same model and a few years later are the ones dying due to lack of integrity.


> The problem is that so many of them use the exact same model and a few years later are the ones dying due to lack of integrity.

Can you substantiate this claim of tech product companies getting large/successful and then “dying due to lack of integrity”?

I haven’t noticed the pattern but if there is one I’m sure interested in the evidence. (And in what sense do they lack integrity?)


Yes! Wow, that was a long time ago. FWIW, it looks like Archive.org has a copy of the press release:

https://web.archive.org/web/20080214193303/http://www.facebo...


What's the tl;dr: on Beacon?


>Beacon formed part of Facebook's advertisement system that sent data from external websites to Facebook, for the purpose of allowing targeted advertisements and allowing users to share their activities with their friends. Beacon would report to Facebook on its members' activities on third-party sites that also participate with Beacon. These activities would be published to users' News Feed. This would occur even when users were not connected to Facebook and would happen without the knowledge of the Facebook user. One of the main concerns was that Beacon did not give the user the option to block the information from being sent to Facebook.

https://en.wikipedia.org/wiki/Facebook_Beacon


Was this the thing that would put up a notification on your Facebook profile that you just bought a dildo from Amazon (for instance)?


Yep. What was even more bizarre is how they never really acknowledged it might be a problem; they just kept taking this arrogant attitude that the plebe users are wrong and just don't understand this amazing new data feature. It was the first time I recall them using analytics to justify their bullshit, and it pretty much set the tone for where we are now.


I really hate that. It's a specific deal between Facebook and Amazon. You can't "block" it. Amazon willingly sends your purchase history to Facebook.


I basically quit after Beacon.


This is how FB was designed to work from the beginning

Not exactly. In all fairness, as Zuck points out, this is a key part of the story, and in theory, why this is different:

In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people's consent.

This raises the question of what Facebook was doing (if anything) to prevent this sort of action, but the fact that they just took CA at their word that the ill-gotten data had been deleted (of course it hadn't) makes me think they did very little. I think this is just as concerning as any other part of this story. Even if people knowingly hand over data to Facebook (or the devs of some app) in exchange for using a service, they wouldn't think it's a free-for-all where anyone can mine the data for whatever they want.


> It is against our policies for developers to share data without people's consent.

Not to mention that it's arguable whether "consent" was really given for Facebook to share the data in the first place. I'd be interested to see some polling results asking whether Facebook users knew what Facebook was up to and whether they feel OK with it.


I'd also be really interested to see screenshots of what the users saw when they clicked "ok." I've been able to find a few screenshots online for other apps, but nothing that indicates what it would have looked like in 2013.



Nice. This doesn't seem to mention sharing data about your friends though? I wonder if that would have been mentioned separately to grant access to your friends' data?


That one is pretty vague. Here's another that I found that's a lot more explicit. This one is definitely from 2013.

https://photos.app.goo.gl/od8XBXMP1YoacpMo2


It mentions "list of friends", but not what kind of data is attached to entries in said list.


This fake consent (EULAs, signup clauses) needs to be shot down once and for all. Consent needs to be visible, granular, and explicit.


These have existed for a long time, on paper contracts.


I found the old permissions request. It seems fairly clear to me, but I doubt people really understood the consequences:

https://photos.app.goo.gl/0z3F0BH9Q0wNK0nz1


Thanks for playing into the story that Facebook created to set the conversation.

This was openly covered last year by the BBC when they interviewed Theresa Hong, Trump's digital campaign manager at the time [interview linked]. The campaign spent $16M on Facebook. Understandably, Facebook gave them the white-glove treatment, and even had their own employees embedded in Project Alamo (the headquarters of the campaign's digital arm).

But today Facebook claims they had no idea who one of their multi-million-dollar clients in 2015-2016 was. That it was just some random quiz-making hacker dude selling data to some other random company.

https://twitter.com/bbcstories/status/896752720522100742?lan...

This piece of work posted today by Facebook is what we call damage control. Don't expect the truth from it-- it will contain truths, but it will not be the truth of the matter. And don't let it set your dialogue, man.


A 1,000-word statement.

Number of times the word "advertising" was mentioned: 0

Facebook continues to pretend its business model is unicorns and kittens, not selling user data for money.


It was a standard crisis-management PR sermon.

"learn from this experience", "doesn't change what happened in the past. ", "responsibility", "going forward", "together".

You find this same boilerplate from athletes who beat their wives, do drugs, or kill people.


Exactly this; no one is sorry, they're sorry they got caught after the fact. This is simply an attempt at ass covering and shifting blame onto Cambridge Analytica so users / investors / governments don't sue Facebook.


You're absolutely right. This is a bullshit non-apology in which nobody is actually sorry for anything. It's crap and does nothing but try to shift blame.

Yet, maybe it serves a purpose? It doesn't matter how sorry they actually feel, nobody is going to feel better from a large, public, self-flagellating apology. Nobody is going to be happier or more mollified or satisfied. All it's going to accomplish is to provide grist for the lawsuit mill - after all, why apologize if you're not guilty?

Again, you're completely right. They're clearly not sorry.


Should Facebook be sued?


I want to say yes, but there is a problem with suing these sorts of monopolistic companies. If Facebook were sued for, let's say, $100 million in a class action lawsuit, even if they end up paying, it ultimately won't change their behavior. They have enough money to pay that, take a small hit in their earnings in the short term, and then continue as if nothing happened.


I had a very hard time imagining Zuckerberg actually writing these words; it just felt too carefully crafted and full of standard PR tropes. The CNN interview should be interesting - then we can hear the actual words from his mouth.


The words he's been rehearsing in hours of mock interviews with the same PR hacks, for the last couple days.


I imagine he'll be answering questions provided to CNN by the Facebook PR team.


No need to create a paper trail; the "hard-hitting"-looking questions any TV news outlet is going to ask are incredibly predictable.


I always see people say this after any PR crisis... but how would you say it instead, if not that? Is there a way to be truly creative and not shoot yourself in the foot at the same time?


He laid out what happened, and several concrete actions they're taking to prevent it from recurring. That's not boilerplate.


I think people are sort of doubtful they will really follow through whole-heartedly with what Zuck wrote, because that's sort of their MO.


Thoughts and prayers that our random business partners will behave.


...and go on to continue to beat their wife, do drugs...


Facebook doesn't sell user data for money. It sells targeted advertising for money. There is a huge difference.


Yea, actually selling their user data would put them out of business... Why buy the cow if you can get the milk for free?


You meant: why sell the cow once, if you can continue to sell milk indefinitely :)


Sell the premium-quality cow to buy two crappy cows. Sell their crappy milk dirt cheap to undercut the premium-quality milk market. Invest, repeat, and race to the bottom.


Because cows don't provide milk indefinitely?

Because you get more money for a cow than milk over a long time period and can use the cash to expand the cow-selling business?


I think this thread has jumped over the moon...

(And that original metaphor was a bit mismatched, actually.)


2018’s 3P data and tracking tech no longer needs to violate FB policy to do what was done in 2013.

FB provides a platform for ads, sure, but it can be used for way more than just ads.

3P tracking can infer who (even specifically) is viewing ads, and it knows the social graph by other means. It can also run effective mass psychometric tests. A/B testing infrastructure can be used for more than just optimizing ads: all of the psychometric dimensions tested by Kogan's 2013 app can be expressed as imagery and messaging embedded in ads, and the same kinds of tests can operate at the same scale by integrating 3P data and tracking (some of which certainly originates from policy-compliant FB apps), which already knows the social graph beyond what FB will allow a single app or ad to draw.
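To illustrate the mechanism: treat each ad variant as an item on a personality inventory, written to appeal to one trait, and let engagement update a per-user score. A toy sketch (entirely illustrative; the trait mapping and names are made up):

    # Toy sketch: ad variants double as psychometric test items.
    from collections import defaultdict

    VARIANT_TRAIT = {"ad_fear": "neuroticism", "ad_novelty": "openness"}

    scores = defaultdict(lambda: defaultdict(int))

    def record_engagement(user_id, variant):
        scores[user_id][VARIANT_TRAIT[variant]] += 1

Run enough variants past enough users and the A/B system is effectively administering Kogan's quiz without ever asking a question.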


All I want is for Mark Zuckerberg to say: "this isn't a community, we're a multi-billion-dollar company. We have a pretty cool website where you can do a bunch of stuff, and we mine your personal data (and sell it) to pay for it. Sound like a good deal? We think so."

But no, we've got to pretend Facebook is a touchy-feely community. I get the feeling that Facebook is ashamed of the way they make their billions.


Why do you have such a hard time believing that he is well intended?

You seem lost in a false narrative that he is out to get you :(

I challenge you to rethink your position assuming he means well...just for the fun of it...and share with us what your conclusion would be


[flagged]


Stop repeating this meme. It's not productive and doesn't help drive the conversation forward in any way.

He said those things when he was 19 years old and Facebook was still a random side project in college.

And yes people were indeed "dumb fucks" to give a random college kid with a side project their personal information. If you went around Walmart and asked people for their SSNs with no value proposition, then they would indeed be dumb as well.

The $400 billion company that Facebook is now, and the growth a person goes through over years of being a CEO and managing people, are significant.

For you to bring up something the guy said when he was a kid is disingenuous and does more to harm any point you may have had than help it.


Exactly. He was 19 and didn't have an army of PR people/lawyers acting as filters between his brain and his mouth. Now his words (especially crisis-time statements) are shaped by a team of professionals. I think I would pay more attention to the former to get an insight into his "real" thoughts.


Have you actually talked to a 19yr old recently? Serious question. Because you seem to lack serious perspective about how a 19yo acts, thinks, or behaves when adults aren’t around.

Also, can you explain to me how people submitting personal information to a random form from a college kid is not dumb? Facebook the product didn't even exist at that point, by the way, and the information collected was more like an email address, nothing really more nefarious than that.


> Have you actually talked to a 19yr old recently?

Yes, most of my intake into the military were 18 and 19. As a society we consider them adult enough to send them into harm's way. I was the Old Man at 31.

Your defence seems to be based on the premise that 19-year-olds are children. No aspect of law or society concurs with that.


Law and society at one point were ok with kids under 15 working in mines and factories. Just because we consider 19 year olds right now 'adult enough' doesn't mean that this won't change with e.g. new scientific discoveries. For example neuroscientists believe that your brain is fully developed around 25 years old: http://www.businessinsider.com/age-brain-matures-at-everythi... (random link but there is more literature on this topic)


It's not really selling its user data for money; it's selling access to its users for money. Sure, user data allows some advanced targeting, but the reason they make the profit they do is that people are buying ads for people to see, not data about them. That's an important distinction.


Most people are buying ads for people to see. Some others, however, are using the platform as access to users' extensive 3P data, once users click and are shuffled through multiple shady ad exchanges gathering and selling data, including the social graph.

Some others, like CA and ilk, also get easy access to user’s psychometric data, by embedding the psychometrics into A/B tested ad content.

The adage "you are the product" has only become more true as FB has advanced, whether intended by their explicit policies or not.


Facebook has cut down more and more on that as they've decided to be an advertising company instead of a platform. The vast majority of their data issues come from pre-2015 when they were still experimenting with their business model.

Honestly, at the moment, Facebook has huge economic incentives to keep as much data to itself as possible. Facebook being the only place where you can microtarget to such an extent is a huge moat around their business.


Even with their data policies, FB (and any ad platform) enables massive data-mining by the third parties which host the ad exchanges and destinations.

And microtargeting can be abused (or just plain used, depending on your perspective) to infer additional data by incorporating it into the 3P analysis once users click and start loading non-FB content. Microtargeting by its nature leaks information about the target segments...

Figuring out the data FB uses to microtarget is simply a matter of buying enough ads, or getting in the middle of enough campaign-to-user relationships (as a central 3P ad exchange or tracking service).


I think there are some pretty large differences in scale between "buying ads to target people with specific attributes and saving the situations where they interact with your ad" and "enabling any application to view all of the information about any friend who has connected with the app user".

I wouldn't argue that micro-targeting can't end up with very specific privacy concerns, but I don't think it's nearly the same scale as "you should probably assume that if you signed up to enable the Graph API on Facebook all information about you prior to 2015 is available to people you probably don't trust".


It's a different scale only in the timeframe of a single campaign. 3P trackers have an ongoing, central vantage over many campaigns, giving them data which they accumulate and sell to each other, causing leaked "private" data to accrete and spread in an essentially viral fashion.

The resulting dataset over even short periods (< 1 yr) is comparable to a total data dump, including an accurate social graph. A "very specific privacy concern" it is not.


They are selling market making for an ad market. Any market is a trade of both value and information.


I disagree, because Facebook's incentive in an advertising business is to keep all their information to themselves. If they're the only ones who have it, then in order to get the same kind of targeting you have to pay them.


It’s not possible for them to keep targeting data to themselves.

Targeted campaign products leak data about the user’s targeted attributes by their very nature.

If you want FB’s targeting data, simply buy targeted campaigns and associate the target attributes with the users who click once they are on your server. At scale, the targeting data is transparent.

Big ad-analytics companies and ad exchanges can and do sit in the center of many campaigns and slurp up the targeting data which naturally leaks from FB by virtue of their selling campaigns based on those targets.
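Concretely, the trick is just to tag each campaign's landing URL with the attributes you targeted, so every click self-reports its segment. A sketch (all names hypothetical):

    # Sketch of the leak: the click itself reveals the targeted attributes.
    from urllib.parse import urlencode

    def landing_url(attrs):
        # attrs e.g. {"age": "18-25", "gender": "m", "interest": "yankees"}
        return "https://example.com/l?" + urlencode(attrs)

    def on_click(visitor_id, attrs, profiles):
        # We bought this ad for this segment; the visitor who clicked is in it.
        profiles.setdefault(visitor_id, {}).update(attrs)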

Whether they want to or not, they are selling their data.

Edit: Care to reply instead of downvote? Is everything above not true?


Interesting - that is the same as the number of times "sorry" and "apologize" were used, apparently.


Who doesn't know Facebook sells targeted ads?


People outside of the tech world, in general, do not really know what the "targeted" in targeted ads means though. They think advertising, they think ads in the NYT or on TV. People do "know", but unless you really think about it, or understand how it works, its invasiveness can be easily overlooked.


Yeah, I just took this to mean they were mad they weren't, from their perspective, properly compensated for such valuable data.


I like the part where he says, "We have a responsibility to protect your data, and if we can't then we don't deserve to serve you." Facebook doesn't "serve" consumers; it built an RPG that harvests user data and sells it to the highest bidder.
