Hacker News
Ask HN: Did Facebook really do something wrong?
37 points by anilshanbhag on Mar 20, 2018 | 27 comments
Users can use Facebook to connect with other people. In the process they share their information on Facebook. Facebook can use this information to categorize users based on their interests and serve them targeted advertising. Advertisers do not see who saw their ads unless the user interacts with the ad.

Now, the platform itself is open and can be used by others in their apps to get user data. In such a case, the user himself approves sharing his Facebook data with the app. This is what happened in the case of Cambridge Analytica. Users used a third-party app which collected data, and they did so voluntarily. The app then shared that data with Cambridge Analytica. The app developer got data from users and used it in ways he didn't disclose to them.

Facebook makes close to no money from these third-party apps. I have written apps in the past which ended up collecting user information that just sits in my database. I don't see how Facebook can prevent exploitation of this data, as it can't be monitored. They could just shut down the pipe, but I believe that would again make many people mad. The app developer is at fault.

Given the above, I feel it's the app developer who is at fault, not Facebook.

TL;DR: Facebook advertising doesn't leak user information. Cambridge Analytica's approach would fail if people simply stopped using third-party apps.




> In such a case, the user himself approves sharing his data on Facebook with the app.

Most users are non-technical and have no idea what this really means. To users it means “if you don’t approve you won’t be able to use this”, so they do. It’s like giving candy to someone who doesn’t understand nutritional facts — all they understand is it tastes good, not that it’s also bad for you. Facebook and other companies know and take advantage of this.

Also, it’s my understanding the Cambridge Analytica app also pulled data about friends of people who used it, so even if you hadn’t consented to the app you were still scraped. At the very least Facebook is at fault for having a system that allows personal information of non-consenting users to be taken by third parties.


I remember reading this a couple of years ago, and I guess the graph was more eloquent than anything else:

http://mattmckeon.com/facebook-privacy/

What's also interesting is to read comments about privacy from 8 years ago...


I don't think it's fair to put the blame on the company because users are naive enough to think that candy is perfectly fine and healthy to eat. People aren't clueless about the idea that companies are selling their information and that their behavior is constantly being tracked. Was it Facebook's responsibility to show "Are you REALLY SURE you want to agree to this?"

If it already warned them prior to taking the quiz and they still said yes, I don't understand why they shouldn't be held accountable for this too.


I sort of agree. Note, however, that when you pull data about friends, you don't get information about friends who have a stricter privacy setting. You also can't get personal information about these friends through the API, though for many of them you could simply scrape it from the web.


“Higher privacy setting” is kind of meaningless when Facebook seemingly goes out of its way to make the settings page hard to use. You pretty much need a PhD at this point to understand what all the toggles do, and even then it’s not enough because they automatically opt you in to any new feature. I remember at some point in the beginning when they kept resetting all my privacy settings because they refactored their features but didn’t preserve my old privacy setting... I just gave up at that point and stopped bothering.


When CA was operating, the API allowed access to data that you allowed your friends to see. Which is a pretty common data setting as most of us are on FB to share info with our friends: https://www.google.com/amp/s/techcrunch.com/2015/04/28/faceb...


Facebook definitely did something wrong. Facebook makes developers sign an agreement that states they won't pass users' information on to other parties. Kogan violated that agreement and Facebook failed to stop him, and then failed again when they didn't follow up sufficiently with Cambridge Analytica. Putting in a framework to protect users' data and then failing to act on it means they failed to protect their users.

Kogan and Cambridge Analytica also did things that I would consider wrong, but that doesn't let FB off the hook.


CA's CEO is on the record specifically talking about setting up "encounters" with attractive women and/or offering bribes to "influence" politicians.

https://www.theguardian.com/uk-news/2018/mar/19/cambridge-an...

FB might not reasonably be expected to know about this, but the company still turned a blind eye to CA's abuse of user data - which seems a strange thing to do when this kind of abuse is illegal in many jurisdictions.

My suspicion has always been that FB's primary purpose comes from the political value of data harvesting, user monitoring, and voter influence. IMO there's always been a shadiness about the way FB operates, and it's certainly not a company I would trust with any non-trivial personal details.


What would be Facebook options to stop Kogan or Cambridge Analytica?

They already have the data, other than suing, what's possible here?


They could have banned CA from operating on FB much sooner.

They could have sued the developer in court to send a message to other bad parties.

They could have notified the users that their data was used improperly.


The reports have been so vague about what Cambridge Analytica did that it's hard to know what data they have or how they used it.

I feel like the political climate is so energized right now that everyone is freaking out instead of waiting for details and specifics. Facebook seems to be taking this very seriously and, generally speaking, I've always been impressed with Zuckerberg's ability to be introspective and make changes instead of digging in his heels.

I'll give them the benefit of the doubt for the moment while all this shakes out.


What exactly has been vague about what CA is accused of? You don’t believe or can’t understand how data from a purportedly academic app was used to harvest data from millions of user friends? Or how that data was then sold to a commercial entity?


Companies have been scraping information off Facebook since they opened up the platform. We had to abandon plans to use Facebook as a social mechanism for photo sharing because of new restrictions Facebook added in 2016. My point is, IMHO, Facebook has been continually tightening controls and adding more privacy features for years, so in this case I give them the benefit of the doubt.


What doubt, though? They gave unconsenting users' data to third parties for years. It was policy. They knew what they were doing but didn't care.

(And I'm not sure what you mean by "scraping", but this was not anonymous web scraping, it was done through APIs these devs were given access to by Facebook.)


The problem with this argument is that they ARE consenting users; they just don't care enough to know. From the Facebook sign-up page: "By clicking Create Account, you agree to our Terms and that you have read our Data Policy..." The Data Policy says that your data is given to third parties. The problem is that common sense has been removed from our society. You DON'T need a PhD, as another poster said; you merely need common sense.


(Part of) the problem is a lack of transparency about the cost you pay.

>They voluntarily did it.

If we're going from voluntarist morality then it seems okay initially but there are two main complicating factors that I see.

(1) Facebook and this app appear to be free. In reality they have a cost, but because that cost takes a form other than money, its value is obscured.

(2) Even the data itself is obscured from the user. Users don't know what data they've handed over, how it's been collected, what conclusions have been drawn from it, who it's been sold to, whether it's been combined with other data sources, etc.

Even the most voluntarist among us would probably see something wrong with the US healthcare system, where you can't know the price of your services until after you've accepted them. Social media data collection is one step worse than that, because you never find out what price you've paid.

A fix for this is radical forced transparency, which is just what the GDPR is going to mandate.


You don’t see how Facebook could prevent exploitation? After 2014, once the CA harvesting first blew up, they restricted access to friends' data. But they knew in 2012 that the Obama campaign had similarly abused the TOS to harvest friend data and to perform analysis and cross-referencing with other lists (such as identifying friends who were also on campaign donor lists).

Saying that people could just write a web scraper anyway is a bullshit rationalization. The technical barrier is high enough to make that an extremely costly option, certainly more costly than all the other data sources that campaigns traditionally use.

FB may not have outright committed evil, but they sure didn’t seem to prioritize safety for their users. And that’s enough on which to consider judging them accordingly.


When you say abused, do you mean that the campaign did not technically violate the TOS in 2012 but took advantage of the lenience Facebook allowed them? Thanks


According to what campaign workers told the NYT, the harvesting of data seems most certainly to have violated FB’s TOS, especially regarding the storing of data for reasons other than caching performance: https://news.ycombinator.com/item?id=16624506

But FB has the right to decide when to enforce TOS or not. It seems they never did so, not just for Obama but for other operators as well. Pretty sure most FB users would be surprised that their data was knowingly analyzed by a third party as thoroughly as the NYT article describes, with FB’s knowledge.


While the title is framed to get people's input, in the text you proclaim that the issue is not Facebook but the users. So what you are really asking is: "I am convinced Facebook did not do anything wrong, so prove me otherwise."

The problem with that is that there's no way to prove a negative. And you already have the biggest defense: "...but people shared their data willingly."

Still, here's my view on this: Facebook is designed to collect data, but it is also designed to keep people engaged so that they share even more. It also makes things easy for people looking to collect data.

Their whole business model is built around data. So I wouldn't be surprised if a leak showed Facebook specifically courting CA, just like Twitter did with RT:

https://www.theguardian.com/media/2017/oct/27/russias-rt-rev...


I'll leave this here, I don't want to pressure HN into learning anything about privacy or Facebook's practices though: https://phys.org/news/2018-02-belgian-court-facebook.html


I agree. I find https://newsroom.fb.com/news/2018/03/suspending-cambridge-an... accurate on the facts.

On the other hand, in Facebook's own words, "In 2015, we learned that a psychology professor at the University of Cambridge named Dr. Aleksandr Kogan lied to us and violated our Platform Policies". If Facebook learned this in 2015, why are they only suspending violators in 2018? That part is blameworthy. But I agree the primary blame lies with Kogan, CA, SCL, etc.


Yes, the app developer is definitely at fault.

The question at hand is whether Facebook could/should have done something/more to avoid this kind of situation. I do not have a definite answer to this question, but one could imagine both technical and legal means that can be used to avoid such abuse of data.

The other question at hand is whether Facebook should be allowed to hold this kind of power. Again, I do not have a definite answer and if the answer is "no", I have no idea how this could be implemented.


> Facebook advertising doesn't leak user information.

It exposes too much data to advertisers and apps, which most users unknowingly agree to. That is still Facebook's problem and should be regulated. I think the downfall of FB has just started, and Google is quite lucky that its efforts with G+ failed, because had they succeeded it would be in the same position.

Cambridge Analytica's use of the data is immoral; its legality should be decided in the courts.


facebook basically did nothing to ensure third party apps & developers were not violating its data access and sharing policies. most developers know it has always been pretty easy to harvest data from facebook by tricking users and once you have the data you can do whatever you want with it.

facebook probably didn't do anything illegal, but many people feel they failed to sufficiently protect user data from bad actors.

In the absence of laws or regulations to punish facebook, the only recourse is for users to leave the platform. this probably won't happen b/c most people don't know or don't care. but if we think it's in the public good to protect this data, we should seek to pass regulations that require platforms like facebook to meet an acceptable threshold of data protection.

given the effects that harvesting facebook data can have on elections, it's probably a good idea for regulators to step in.


One issue is that the older version of the graph API permitted third party apps to access your full friends list and data your friends had shared. So I post photos and statuses, and those aren't just visible to my friends but also to third party apps my friends are using.
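To make the reach of that older API concrete, here is a rough, hypothetical sketch of what a pre-v2.0 Graph API request for friend data looked like. The field names, token, and version string are illustrative from memory, not a working client (v1.0 was retired in 2015), but they capture the point: an app one user installed could request data belonging to that user's friends, who never authorized the app themselves.

```python
# Illustrative sketch only: the endpoint shape of an old (v1.0-era)
# Graph API call for a user's friends. Field names and token are
# hypothetical; this does not perform any network request.
from urllib.parse import urlencode

GRAPH = "https://graph.facebook.com/v1.0"

def friends_request(user_id: str, access_token: str) -> str:
    """Build the URL an app would have requested to pull friend data."""
    params = urlencode({
        # Note: these fields describe the *friends*, not the app's user.
        "fields": "id,name,likes,location",
        "access_token": access_token,
    })
    return f"{GRAPH}/{user_id}/friends?{params}"

url = friends_request("me", "HYPOTHETICAL_TOKEN")
```

In v2.0 and later, the equivalent endpoint was restricted to return only friends who had themselves installed the app, which is the change the comment above is describing.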


The crux here is the difference between affirmatively doing something wrong and failing to do enough right. I'm not going to wade into the rest of the debate, but I think anybody giving an answer should try to be clear which question they're answering.



