An Update on Our Plans to Restrict Data Access on Facebook (fb.com)
166 points by robtaylor on April 4, 2018 | 158 comments



None of these changes will do anything to fix the basic issue. All the following aspects were derived from watching the testimony Christopher Wylie gave before the UK House of Commons Select Committee. He's the CA whistleblower. It's a very illuminating watch - https://www.youtube.com/watch?v=X5g6IJm7YJQ

The problem is the following:

[1] CA wrote a trojan FB app to derive psychographic data on FB users. This let them determine, for example, how susceptible you were to misleading or fake news.

[2] They then used FB targeting to target these specific people at scale, pushing extreme fake news such as "Obama moving troops to Texas to ensure 3rd term". This is military-grade psyops applied at unprecedented scale.

[3] These people, as they were susceptible to manipulation, would then convert at unusually high rates. He says 5% or higher conversion. Conversion was measured as an action like donating money or signing up for a mailing list.

In this way, the entire democratic process was corrupted. The issue is not that there were dirty tricks in the 2016 election. The issue here is that the existence of FB's app platform allows the detailed psychological profiling of millions of people at scale, and then allows them to be targeted at scale using that profiling. This is a clear and present danger to democracy.

Even with these changes, a rogue app such as CA's would be allowed on the FB platform. FB has no visibility into how app data can be used offline.


> In this way, the entire democratic process was corrupted.

Let's call this what it is: Propaganda. Same as it always was, just with more input tailoring & curating it.

The biggest problem I still see here is that people expect "truth" from random Facebook posts. It's a website where basically anybody can upload anything. It's 4chan with a few more rules.

Democracy works best with an informed voter base, but misinformation has been there from the beginning. I don't see how this "corrupted the democratic process" any differently than in the past, when propaganda was pushed through other common media outlets that made money by catering to their audiences' outrage and gullibility.

Judgment of information is something where voting citizens need to be personally vigilant, no matter their sources. That vigilance includes recognizing echo chambers and lack of exposure to a breadth of ideas.


The problem isn’t necessarily “fake news”, it’s the ability to target it. This targeting subjects each person to a completely personalized information flow. It’s one thing to have propaganda be broadcast to all, where as a society we can discuss, debate and choose to accept or reject it. With hyper targeting that process can’t exist. How am I supposed to talk to my neighbours or colleagues about current events (or “fake news”) that I don’t know exists? Democracy does not exist in isolation. We can reject the lunatic town crier, but it is much harder when he knows you deeply and is whispering in your ear.


> How am I supposed to talk to my neighbours or colleagues about current events (or “fake news”) that I don’t know exists?

I don't quite understand. This reads really negatively to me, like you want to proactively judge what others privately engage in so you can "re-educate" them in case you disagree, and lack of access to that is a problem? That's a horribly authoritarian view, compared to simply promoting broad exposure that shines light on the multiple sides of issues and fosters more informed judgment (which AI-driven content "optimization" specifically works against, as its own broader problem).

> We can reject the lunatic town crier, but it is much harder when he knows you deeply and is whispering in your ear.

Of course, on the receiving side of info, when somebody with deep knowledge of you is trying to convince you of something, it again comes down to vigilance and exposure on your part. We all have family members in this exact role, trying to convince us all of their viewpoint, with intimate knowledge of us. What's the difference if it's a 3rd party? What if it's you promoting your ideas, and how would you want any "safety" mechanisms affecting you?

The responsibility always falls on the person receiving information, not on policing others' receipt of information. The latter ends up with people arrested for googling pressure cookers.


I am not getting the same sentiment that you are feeling from the parent comment.

Healthy discourse cannot occur in the dark. How can I discuss an article that I don't know even exists?


If somebody keeps their matters or dealings private, it's not your role or your right to pry there to ensure they're "right". Other than that, people freely share both ways about what they believe and discourse continues. The solution to "wrong thinking" is to spread "good" information, not to witch hunt for censorship.

The overarching social problem with social media platforms is the pigeonholing. Freely sharing and discussing is quickly segmented away. The marketing & political money is made on outrage and tribalism, and these amplify differences to the point of segmenting others off if there's any dissonance at all.

Less categorized places of discussion, where members share a broader forum or space, must be more civilized by nature. You have less cherry picking of engagement. You end up exposed to (and exposing others to your) offensive differences, and need to deal with that exposure. We get that outside of social media circles, and it's overall a more healthy environment. Consequently, more and more people are recognizing that social media is not a place to anchor their trust, information, and time, which is a positive change.


I agree with OP:

> Healthy discourse cannot occur in the dark. How can I discuss an article that I don't know even exists?

Hyper targeted informational warfare inflames tribalism. You suggest we "mingle more" about 20 miles away at the park. It might be "healthy", but no one will give a crap if the park is empty.

Blaming individuals for group manipulation is not the answer.


I didn't suggest mingling at the park. I suggest being in online places that aren't the hyper-segmented social media sites. Even here, there's a big sorted bag of articles, each with a big sorted bag of comments. Everybody engages with the same bag of stuff. We don't all have our individual view of the world; we see the entire world of HN together, depending on the time of day we visit. That's healthier for diverse discourse and diverse exposure than a pigeon-holed, AI-enforced separation of only your "targeted interests". Of course, you can argue the mob selection bias of articles here, but it's just 1 mob, not a system which enables an unbounded set of mini-mobs to slice at every discernible difference. We're all in HN together, and that has a striking effect on the style of discourse here in comparison.

(As an aside, I have zero problem with using social media for communication with actual friends, relatives, and activities you're a part of. It's all the extra crap they shovel on to chase that unbounded revenue growth that ultimately feeds the problem.)

Regarding the group manipulation, individuals are always at least legally held to their own actions even under manipulative circumstances (distinguished from coerced ones).

Group manipulation only lasts for so long as people don't recognize what's going on and how it's negatively affecting them, which is much harder to keep under wraps these days. There are movements against using Facebook now, which is the proper response to seeing how manipulative and literally unhealthy its ecosystem has become; whereas I don't consider it a reasonable response to call for wide censorship and scanning of personal information and "private" exchanges that happen there, in a big ol' ball of establishing precedent. If manipulators broke laws by posting or accessing stuff, it's a matter of jurisdiction as to who penalizes them, which is always a problem online, but that legal process seems to be properly progressing against CA.


I'd suggest our frames of reference are too far apart.

Your statements indicate that you place the entire onus of responsibility on end users. I assign some fault to users, but also to the providers.

Generally, the world isn't black and white.

Why do you believe users deserve all of the responsibility? What about in countries where the entirety of information in and out is filtered?

This style of propaganda may be a type of coerced information.

These are also private entities. They can legally and morally censor without recourse.

All in all, I find the trivialization of those possibilities disturbing.


As I said before, propaganda always has and will exist. Certainly measures can be taken to combat it, but some nonzero amount will be there and people first and foremost should expect to face it.

You're injecting an opposite extreme, implying that I don't think propaganda should be combated at all. But the first line of combat is broader communication, so usurpation for propaganda isn't the only content that flows, and multiple points of view are freely shared. With respect to free speech, legal action against propagandists should generally be reserved for origination of falsehoods and incitement to violence, which tends to already exist in our legal frameworks.


Not quite. No one is asking for personal content to be judged.

But if there is a one-to-many pattern of information sharing (e.g. an organized attempt to share content) to unrelated parties, then that content should be vetted for veracity. Either FB does it, or they surface it for others to flag it.


> But if there is a one-to-many pattern of information sharing (e.g. an organized attempt to share content) to unrelated parties, then that content should be vetted for veracity. Either FB does it, or they surface it for others to flag it.

This is extremely toxic for freedom of speech and information sharing and is a veiled gateway to censorship. It subjects veracity to the approval of an unknown group with unknown motives that you're assuming are righteous.

There are verifiably fake news stories that we can all logically assume are fake, but the matter of veracity is ultimately decided by opinion, even if that opinion is your opinion of which news source to trust.

The only people that can truly verify information are those that are present when it's generated and we rely on those people to be truthful. You trust your source of news when you determine something is fact. Bob's Blog reporting "Obama sends troops to Texas to ensure 3rd term" will receive a notably different reception than NPR reporting the same thing. But ultimately that difference in perception is your opinion of what constitutes valid information. Unless you're in Texas, do you really know for sure?

If an article is 90% correct, but 10% unknown or possibly speculative, who makes the call on the validity of the news and story? What if that 10% of the information dramatically affects the context of the other 90%?

If a police investigation determines that a police officer lawfully killed someone in the line of duty, but someone disputes that with another story, who do we trust? Should one of the stories be suppressed because it can't be validated? Personally, I prefer to reserve the decision for what I consider valid for myself, not the overlords of the platform I use to consume the content.


Why? Why do you buy into this meme of tyranny of censorship and dictated truthiness?

Progress cannot happen in society without good & bad ideas freely propagating, and individual decisions to buy into them or not. Every significant cultural movement stems from a counter-cultural uprising, and these sorts of things would be swept up in the "vetting".


Certainly. If it's an opinion, then that's fine. This issue is different - it's about manipulation, the willful presentation of known fake information as real news. That is not allowed in the print media by law (libel/slander) - so why should it be allowed on digital platforms?


Because Facebook is theoretically not a news media channel, with hired content creation promoting something as factual reporting. Certainly if an individual posts libelous content, they should be legally responsible for that content. But it's ostensibly not Facebook's content, as a social media provider, and international users make legal enforcement near impossible. Propaganda has always existed in opinion pieces, reporting on "rumors" to disclaim themselves, flyers outside official media channels, etc. Facebook doesn't affect any of that whether it's social media or becomes news media.

Again, it's 4chan with more rules. Anybody can post anything. It's not a place of fact or truth. It's of people sharing their lives, thoughts, hobbies, opinions, notices, likes & dislikes, etc. Some extremist $SIDE-wing Facebook channel is simply posting such things. It's not an official channel for trustworthy news, it's users being social, whether that user is Aunt Flo or CNN.

The core problem is that people think it's a "trustworthy" information platform (and Zuckerberg wants that trust for more customer buy-in). It's not. And it won't be, unless you remove the core personal family & friends social aspect of it.


Note that eliminating the ability to do detailed profiling and precise targeting does NOT stop this kind of manipulation. It just makes it more expensive.

Those people who will respond to your fake news in a way favorable to you are still out there. It just means that instead of being able to target a group of, say, 10,000 people that will give you a 5% conversion rate, you might have to pay to target 1,000,000 people with a 0.05% conversion rate.
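A quick back-of-the-envelope sketch of that trade-off (all rates and costs here are made-up for illustration):

    # Same ~500 conversions either way; only the price changes.
    cost_per_person = 0.005  # assume half a cent per person reached

    campaigns = [
        ("targeted", 10_000, 0.05),      # 5% conversion
        ("broad", 1_000_000, 0.0005),    # 0.05% conversion
    ]
    for label, reach, rate in campaigns:
        print(label, "conversions:", int(reach * rate),
              "cost: $%.0f" % (reach * cost_per_person))
    # targeted conversions: 500 cost: $50
    # broad conversions: 500 cost: $5000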

This raises an interesting question: would we be better off if instead of restricting data we make it more widely available?

As suggested in the second paragraph above, restricting the data doesn't stop those with enough money from influencing susceptible people at scale. It just makes it more expensive, so only the very wealthy can do it.

If we make the data more open, we make the playing field more level, perhaps giving smaller, less well financed groups a chance to compete with the billionaire backed groups and causes.


The idea is to discourage people from using propaganda to manipulate democracy. I honestly don't think that throwing open the floodgates so that effectively anyone can do it would fix the problem at all. Realistically all you're going to do there is make it cheaper and easier for existing players and worse for everyone subjected to it.


One solution to OP's problem is to strengthen libel and slander laws to deter people from casually publishing fraudulent news, akin to a DMCA takedown request but perhaps with a bit more bite. In this case I would distinguish publishing from 'sharing' with a bit more legal finesse.

The problem is the press will scream bloody murder at any attempt to rein in their right to publish, perhaps rightfully so. IANAL.


How enforceable would that be outside US borders, realistically?


Good point


Whilst I appreciate the concern, so long as the US government has platforms such as the EC-130 Compass Call ( which can overpower broadcast frequencies with its own propaganda ) in order to 'shape' democracy in other nations it's quite hard for people outside the USA to think anything other than 'a taste of your own medicine'.

The only thing that seems different is that it was the 'other guys' who were playing psyops better this time around, rather than the incumbent government.


Also what about Hillary Clinton's emails while we're at it. That is the real scandal. The other issue should be ignored because the world isn't perfect.


Quite true and an interesting proposition. Especially with regard to transparency: if we all can see the data FB sees on which ads were targeted to whom, then that might help as well, as outsiders can watch for dark patterns. In addition, I think they need to use AI to check whether posts and ads may be fake before allowing them to go live.

Of course, doing so would mean fewer clicks. The more extreme the info, the greater the clicks.

This is at the heart of Zuck’s dilemma - to curb this problem in a meaningful way means reduced revenue.


What's interesting is there have been more than the usual number of downvotes in this thread. I wonder if, using IP info, admins can determine whether these downvotes are clustering from within an org?


Yes, I've noticed this on other HN posts about Facebook. Lots of downvotes without comments.

Detecting and preventing brigading needs to be more sophisticated than just looking at IP addresses.

You'd need to determine whether groups of users are acting in concert across multiple posts and comments, regardless of IP.
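As a purely illustrative sketch (I have no idea how HN actually does it), one such signal is how often pairs of accounts vote on the same items, e.g. the Jaccard similarity of their vote sets:

    from itertools import combinations

    # user -> set of item ids they downvoted (hypothetical data)
    votes = {
        "alice": {1, 2, 3, 4, 5},
        "bob":   {1, 2, 3, 4, 6},  # overlaps heavily with alice
        "carol": {7, 8, 9},
    }

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    for (u, su), (v, sv) in combinations(votes.items(), 2):
        sim = jaccard(su, sv)
        if sim > 0.5:  # threshold is arbitrary here
            print("possible coordination:", u, v, round(sim, 2))

A real system would add timing correlation, account age, and much more, but the point is that the IP address doesn't have to enter into it.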


What if a group of users believes and behaves similarly, without coördinating amongst themselves?

Should that be prevented? It's not really distinguishable from the scenario you want to prevent.


Like-minded people acting independently have a very different signature from organized campaigns of brigades and sock puppets.

Independent actors are far less correlated than centrally orchestrated groups.


Having seen many discussions about whether some group is a brigade of sock puppets or not, the signature of like-minded people acting independently is that they hold views that the person judging them agrees with, whereas anyone who holds views they dislike is clearly a brigade of sock puppets. This seems to hold across the board, and judging based on how "correlated" they are doesn't change this because everyone is biased towards thinking of people who're like them as more diverse in viewpoint and outsiders as all alike.


These would reduce the scale at which it would be able to operate. Someone would have to interact with the app directly, as opposed to being swept up as a friend of a friend of someone who did.


Can you point to the particular new limitation that will have this effect? I can't find anything in the announced changes that does this. The key attraction of FB - viral marketing for an app - is something that FB wouldn't touch, I think.


The friends-of-friends change happened in 2014. Cambridge Analytica's method stopped working back then as apps could no longer acquire permission to query friends of friends.


Someone can still write an app to distribute on FB, and can use its platform's ability to foster virality to make it take off. That's a core feature of FB. The change you're talking about, if indeed it was made, may reduce the speed of virality a tad, but wouldn't really affect the ability of an app to spread virally.


Can you cite any sources? What I've heard was that virality for apps drove off a cliff in 2014.


Nice example. Do you have a screenshot of what the facebook feed of those prospects would look like (with ads) ?


"It will also prevent apps from using Facebook Login to collect users' personal information, including details like their religious or political views, relationship status, education and work history, and more.

...

In an alarming revelation, he said that recent investigations into data privacy have revealed malicious actors cycling through hundreds of thousands of IP addresses in order to search for users by their phone numbers and scrape their public profile information.

Until now, users have had to opt out of making their profiles searchable by phone number. Most, Zuckerberg said, never opted out.

Though the CEO accepted blame for all of these data privacy and trust issues, saying, "It was my mistake," he also often put the onus on Facebook users to know better.

He mentioned, for instance, that the only information that bad actors would be able to scrape using a phone number was information that was public on Facebook user profiles.

Of the researcher who built the data-scraping app for Cambridge Analytica, Zuckerberg said, "Yes, he broke the policy, he broke people's expectations, but also, people chose to share that data with him."

And yet it was Zuckerberg and the company he built that made people's data privacy settings so open by default, and made it difficult to find, understand, and adjust those settings."

[ Poll: Is "open by default" congruent with "privacy by design"? ]

Source:

https://www.wired.com/story/facebook-exposed-87-million-user...


Right. Exactly.

And part of the propaganda, even today, is the US right trying to equate that somehow to Obama. They are relying on wan, sad whataboutism: "When Obama used social media it was ok", "If Hillary had done this it would be ok".

No. The Trump campaign spent $100M on facebook ads. If they are so proud of those ads, let's see them. Release them. Let's see what they were telling people, and who they were targeting.

For normal political advertising on TV or even direct mail, it's hard to keep what you're doing a secret. For online targeted advertising, there is no such constraint.

We need to figure out a way to make all political advertising, and who it's targeted to, publicly disclosed. Sunshine helps.


Hmmm. Do you expect to find something very outrageous in those ads? If that would be the case, I think the press would have informed us already. It's not like they are very indulgent towards Teh Donald.

And it's hard to spend $100M on facebook ads with some outrageous content without anyone noticing. Screenshots will be posted, shared, etc.

So, I don't expect anything that wasn't said by Trump himself during his stump speeches and pre-election rallies. "Build Teh Wall", et al.


FB never revealed the content of these ads, so no one other than FB knows.


Then they fixed a "bug" and removed content from their platform:

https://www.usnews.com/news/politics/articles/2017-10-13/fac...


I've had facial recognition off forever on FB and every time I've logged in in the past few weeks the following is one of the first entries in my feed:

https://imgur.com/a/pMhoL

It's a nag to turn on facial recognition. Feels like really bad form to be asking for such intrusive extra info with what they're going through right now.


I got that every time I used the site for years. It wasn't until I deleted my profile and created a new one where I never opted in to the recognition in the first place (or allowed any tags of myself for it to build the face profile, perhaps) that I was able to make it go away.


Random thought: when you created the new account you accepted their right to do this in the ToS and so they didn't need to explicitly ask?


Perhaps, but I believe I recall opting out of it. Also as I said, I never gave them any seed data under the new account to start with. My original account had plenty of old pictures that people had tagged me in (back when you had to do that manually!), and at some point I went through and deleted all those tags though I'm quite certain they still were cataloged.


This is very likely the case.


Facebook also gives me a fake notification on Messenger every time I use the app or website. Sorry Facebook, I don't want to use your incredibly subpar messaging app.


They show the fake message notification on the mobile website (and I have my laptop right in front of me with no new messages in days) only to redirect me to the store to download their messenger app. This is one of the darkest patterns I have seen outside of obviously scammy websites.


I actually have the app installed and it still shows me a notification...I went so far as to delete all conversations and I still get that notification. I've contacted Facebook regarding the issue, they haven't bothered to respond. It's absolutely a dark pattern that makes me use Facebook less.

But it's probably a good thing, considering Facebook also apparently shares the content of those conversations.


It’s already on. They’re just asking if you want to see it.


The actual article title is "An Update on Our Plans to Restrict Data Access on Facebook". It details a list of new API restrictions for Facebook Apps, as well as planned notifications of users whose data was leaked to CA.

These additional API restrictions may be closing the door after the horses have bolted, but they will also restrict more scraping and data mining. However, I'm sure the value of Facebook data that companies have already collected just shot up significantly...

I also think that the step of planting an alert on affected users' News Feeds is a good one, and something that I didn't expect Facebook would go for. Curious to see what the report says when that feature goes live.


Facebook: "We have no fucking clue how much data they took from us. We're not a company that specializes in data."

(shamelessly stolen from reddit)


Getting the exact number requires operations that are non-trivial at scale: you have to check who agreed to share information with that version of the app (there were several) and take into account the accounts that have been deleted in the last four years; whether Facebook can access that particular information at all is unclear to me (I would believe not). Then you have to recreate a list of all their friends at the time (which is, again, not trivial with some friendships having been deleted) and take care of duplicates -- doable, but not trivial at that scale. Whether people are American is also non-trivial: I'm not sure there were clear geographic entities to make that easy in 2013. People move, and Facebook only has your location, not your passport. You then have people without a recognisable name, people who forgot their accounts, or people too young to be in the voting registry. Do you count all the IDs, those who were still active in November 2016, or those who match local registries?

People who specialise in data (like people who code professionally) are precisely those who would have all those questions at the top of their mind, and who know that old, graph-based, editable datasets that need to be matched against another dataset are painful to work with.

Edit: I was a data scientist for Facebook and I can personally attest that most of those are genuinely hard, especially those that I intentionally overlooked.
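To make just one of those steps concrete, here's a toy sketch of the matching problem alone (entirely made-up data; the real pipelines are far messier):

    # Matching scraped 2013-era profiles against a voter registry.
    scraped = [("Jon Smith", "TX"), ("J. Smith", "TX"), ("Ana Diaz", None)]
    registry = {("jon smith", "TX"), ("ana diaz", "FL")}

    def normalize(name):
        return name.lower().replace(".", "").strip()

    for name, state in scraped:
        if state is None:
            print(name, "-> unknown location, can't match")
        elif (normalize(name), state) in registry:
            print(name, "-> match (same person? duplicate account?)")
        else:
            print(name, "-> no match: nickname? moved? deleted?")

Every branch there hides a judgment call, and each judgment call changes the final count.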


I wish I could just pay a monthly fee to access Facebook but to be protected from advertising and from being the product.

I also wish I could client-side encrypt all of my content, share the keys with the friends who I want to have the ability to view my content, and somehow have this all be frictionless and transparent from my and my contacts' perspectives.
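The crypto primitives for the second wish already exist; the hard part is making it frictionless. A rough sketch of the usual hybrid approach, using Python's cryptography package (key distribution and UX are the unsolved parts):

    # Encrypt a post once with a symmetric key, then wrap that key
    # separately for each friend's public key.
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    post = b"my private status update"

    content_key = Fernet.generate_key()
    ciphertext = Fernet(content_key).encrypt(post)  # what the server stores

    # Each friend holds a keypair generated client-side, never uploaded.
    friend_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped = friend_key.public_key().encrypt(content_key, oaep)

    # Friend's client: unwrap the content key, then decrypt the post.
    assert Fernet(friend_key.decrypt(wrapped, oaep)).decrypt(ciphertext) == post

The server only ever sees ciphertext and wrapped keys, which of course is exactly why an ad-funded platform would never ship it.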


Even when you're the customer, you'll still be a product. They'll sell you just like credit card companies have for decades.

Companies are relentlessly profit-optimizing entities, and they cannot forego the extra revenue stream. Employees with ethics are like tissue holding back a tidal wave.

The only way to avoid having them exploit the data is to deny them the data in the first place.


There are some larger businesses that do right by their customers and employees, because the leadership has a strong sense of ethics and of responsibility for the ethics of their whole company.


Can you give examples of such businesses? I'm genuinely curious.


IMO Apple doesn't data mine like other large tech companies. Especially since they shut down iAds.


That might be true now. What happens should Apple go bankrupt? Will those who buy the assets uphold the culture?


> Companies are relentlessly profit-optimizing entities

This isn't some mysterious or predetermined force. Companies are just made of people.

> and they cannot forego the extra revenue stream.

Yes. They CAN. And if the incentives were in the right place they would. For instance, Coke would probably sell more fizzy sugar water if it still used cocaine as an ingredient, but it doesn't, because it's illegal.


We need legal action on this. Reselling data needs explicit user permission with the user being able to decline. This would be very burdensome but it seems to be the only way.


The users who would be most willing to pay for such a change on Facebook are probably also the most lucrative demographic for advertisers.


> I wish I could just pay a monthly fee to access Facebook but to be protected from advertising and from being the product.

Honestly, they'd take your money and still treat you as a product and sell you. Their entire corporate culture is built around users being the product, and small cash payments won't change that.

Nothing short of firing the entirety of Facebook's leadership and a good fraction of its other employees will change how it views its users.


> Their entire corporate culture is built around users being the product, and small cash payments won't change that.

You make that sound like a fact...can you please provide sources!

I am asking this because it doesn't align with my personal assessment of people who I know that work at Facebook or Google...like not at all!


https://www.nytimes.com/2018/04/03/opinion/facebook-fix-repl...:

> Every business has its founding DNA. Real corporate change is rare, especially when the same leaders remain in charge. In Facebook’s case, we are not speaking of a few missteps here and there, the misbehavior of a few aberrant employees. The problems are central and structural, the predicted consequences of its business model. From the day it first sought revenue, Facebook prioritized growth over any other possible goal, maximizing the harvest of data and human attention. Its promises to investors have demanded an ever-improving ability to spy on and manipulate large populations of people. Facebook, at its core, is a surveillance machine, and to expect that to change is misplaced optimism.

This observation would also likely apply to a hypothetical Facebook that offered user subscriptions:

https://talkingpointsmemo.com/edblog/a-serf-on-googles-farm:

> One thing I’ve observed with Google over the years is that it is institutionally so used to its ‘customers’ actually being its products that when it gets into businesses where it actually has customers it really has little sense of how to deal with them.

Google's "customer service" is automated and unstaffed. When they're used to doing it that way, why reduce margins to staff a call center for the paying customers? Treat the paying customers and ad-watching users alike. Likewise, when you've built a mechanism to monetize user data, and you're used to running your business off it, why shut it all off for the few that pay you? It's "leaving money on the table." Keep it on, but maybe tone it down a little, and make even more money. To resist these temptations requires a strong culture that Facebook obviously does not have.


Mastodon can help you with the first part - you own the server and all the data on it. AFAIK there's no encryption of data at rest so you'd need to add that, but if you own the server it's less of a concern.


Never going to happen, unless users organize some kind of collective action. Imagine how quickly Facebook would start to make changes if the bulk of users deactivated their accounts in protest, similar to a strike [1].

Easy to do in theory, hard to do in practice.

[1] https://medium.com/@oddbert2000/call-for-a-facebook-users-st...


It will not generate enough revenue compared to ad revenue.


Secure Scuttlebutt. The problem is to get people on it.


What a terrible name. :/


"Scuttlebutt" is nautical term for a water fountain (which was originally a cask, not a fountain). Since sailors would congregate around the scuttlebutt and gossip, "scuttlebutt" also came to mean gossip as well. I guess I'd call it the nautical version of "water cooler talk."

It's a rather obscure term to use outside a nautical context, so I agree it's a bad name if targeting the general public. It would be a fine name if its target audience was only sailors.


That makes a lot of sense, when explained. I was thinking more from a branding perspective. :P

"Hey non-technical friend, instead of Facebook you should use Secure Scuttlebutt; it's so much better!"


Oh, I agree it's a terrible name for a product whose audience is the general public! I was just explaining the name.


Gossip has negative connotations to start with. If people are gossiping about you, it's not something most people consider a compliment.


The problem is the name. Names matter.


They can start by asking the users about it.


Doesn't beat Equifax, which impacted 143 million Americans, jeopardizing consumer Social Security numbers, birth dates, addresses and some driver's license numbers. Just another day with no consequences for exposing what I'd label as highly personal information.

FB can limit what it wants, but someone will eventually find the means to buy the necessary data and build what CA did going forward. IMO, 2020 election campaign costs will mean it's the billion-dollar candidate that competes, so they'll just buy the data they need.


Yes, the Equifax breach is orders of magnitude more outrageous than this Facebook business.


I think we need a legal framework around sharing data. Even if Facebook did everything right some other companies will mess things up. In the end a person should always be notified when his/her information gets shared with another party and have the ability to approve. This will make the business model of many companies difficult but so be it. I don't see any other way.


Nobody attacks the data collection itself. Anyone who accepts this is a fool for letting Facebook continue to pull the wool over their eyes. This human centipede of data collection needs to die.


I accept it. Facebook is probably one of the better companies out there when it comes to security. They haven't had a true breach; the scandals so far have been small, mostly because the mores of society have shifted before FB can roll out changes to match them (and they always do).


They haven't had security 'breaches' because nobody ever called it that, but FB has been leaking data out the back door for years.

The amount of info that you used to be able to pull from Facebook's API was incredible, and most people didn't realize it. Even information as bland as friends and friends-of-friends is enough to build a useful social graph around a person. (Years ago I did just this, and it was amazing how the graph clustered all my different social groups)
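For the curious, the clustering falls out almost for free once you have the edge list. A sketch with networkx (toy edges standing in for what the API used to return):

    import networkx as nx
    from networkx.algorithms import community

    G = nx.Graph()
    G.add_edges_from([
        ("me", "ann"), ("me", "ben"), ("ann", "ben"),  # coworkers
        ("me", "cat"), ("me", "dan"), ("cat", "dan"),  # family
    ])

    # Modularity-based community detection separates the social circles.
    for group in community.greedy_modularity_communities(G):
        print(sorted(group))

With a real friends-of-friends pull, each community tends to map cleanly onto a workplace, a school, a family, and so on.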


"Until today, people could enter another person’s phone number or email address into Facebook search to help find them. ... malicious actors have also abused these features to scrape public profile information by submitting phone numbers or email addresses they already have through search and account recovery. Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way."

Hm. This seems like an interesting tidbit, I would love to know more. It seems to imply that many profiles have already been scraped in this way. A phone number is a really strong cross-domain identifier as we use it across a bunch of different online services. Collate your Facebook scrape with a couple data brokers and you've got a real strong profile of someone.
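A sketch of why the phone number is such a strong join key (made-up records; a broker merge is essentially this at scale):

    # Join a hypothetical FB scrape with a data-broker file on phone number.
    fb_scrape = {"+15551234567": {"name": "Jane Doe", "city": "Austin"}}
    broker = {"+15551234567": {"email": "jane@example.com", "income": "75k"}}

    profiles = {}
    for phone, fb_data in fb_scrape.items():
        merged = dict(fb_data)
        merged.update(broker.get(phone, {}))  # one key links both datasets
        profiles[phone] = merged

    print(profiles["+15551234567"])
    # {'name': 'Jane Doe', 'city': 'Austin', 'email': 'jane@example.com', ...}

No fuzzy matching needed, which is what makes it so much more dangerous than name-based matching.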


I'm more concerned about fundamental privacy issues, such as so-called "data onboarding," in which Facebook (and others) helps marketers connect your online identity with your offline identity. That's where we need more transparency -- and perhaps more regulation.


Facebook can't seem to behave in a trustworthy manner. Initial findings about the improper sharing happened in 2015. How many years does the "move fast, break things" company need to accurately assess damage, or does it NOT want to accurately assess at all.


It's not your fault that you missed the reporting on it, but FB actually DID respond in 2015. They closed the friends-posts permission way back in 2015. They had plans to do so even before the story broke. What C.A. did hasn't been possible since 2015. They responded appropriately at the time. Of course judging by the anti-FB media frenzy you wouldn't know it.


Yes they reacted, but only after they got called out on it [1].

Before that, for years, they knew exactly what they were doing when they gave apps access to their users' friends. And it must've been obvious to them throughout how such abuse could be executed. Yet they didn't try very hard at all to prevent it.

[1] https://www.theguardian.com/commentisfree/2018/mar/26/facebo...


> Facebook can't seem to behave in a trustworthy manner.

My goodness, this is the same issue, not a new one. Facebook is behaving in a trustworthy manner right now, over-communicating the scope and details of this issue, yet here we are to attack them anew over it.


It sounds more like they've kept the number undisclosed for as long as they could get away with it.


Where is the trustworthiness in this that you are seeing?

They are just telling people what they think they want to hear given how they have been caught out.


> They are just telling people what they think they want to hear given how they have been caught out.

Wait a second, let's back up. They didn't get "caught" doing anything. They purposely, intentionally, and with everyone's explicit knowledge shared friend list data with an app developer.

This is just a basic fact. No one, not even my mother, can realistically claim they did not know and understand Facebook was sharing graph data with app developers in 2012 or 2014. Not only was it crystal clear when you installed an app; even if you didn't, anyone who was on Facebook was inundated by messages from Farmville and many other apps letting them know what their friends were doing.

Later, by 2014, Facebook decided they needed to be more restrictive with this data. They shut down app developers' access to the social graph, essentially killing Facebook Platform, which everyone expected to be a primary driver of future revenues. They shut down Graph Search, an extremely useful tool, because it made it too easy to collect personal data.

But we need to be clear that Facebook was not "caught" doing anything at all. They did exactly what they said they would do, which was plain to everyone, even my technophobic mom.

Separately, in 2014, an app developer shared personal data with Cambridge Analytica. Facebook contacted both parties and requested that they certify they deleted the data, which they did.

The only reason people are upset now is because:

a) politics is involved, and b) they are retroactively applying current best practices with personal data, which were not common in 2014 and before.

The incredible part about all of this is that so many other social networks (and other companies) continue to collect the exact same data, and many of them share it publicly. Almost all Twitter users have their friend list open to the public, for all to see, along with all of their tweets, because that's what the platform encourages. No one would say Twitter has been "caught" doing this.

In fact, Facebook has been extremely up-front about the situation. They fixed the problem 4 years before it came to light. They have announced important and strong changes to further protect data in the future. They have publicly and widely announced their detailed findings in this case, and they have promised investigations of similar unauthorized usages of personal data that may have occurred with other app developers.

I mean, what more do you really want them to do?


To stop existing.

They are a net-negative for society.


"over-communicating" is good, but most of this info seems like it should have been revealed earlier, no? It's obvious that they knew.


"Move fast and break things" explicitly lionizes a disregard for externalities, so it's no surprise that assessing the damage is not in the company's DNA


A fish rots from the head down


The optimist in me says these changes have been a long time coming. The skeptic in me says FB is doing this to avoid being regulated.

Anybody with more insight care to comment?


Well just take a look at the techcrunch article published today on how Facebook is not committing to GDPR standards for North American users. https://techcrunch.com/2018/04/04/facebook-gdpr-wont-be-univ...

It basically says they are going to do the minimal effort required to protect privacy while avoiding regulation.


The TC article is unfortunately misleading.

As I posted elsewhere already, GDPR is more than just some rules around what you need to ask users' permission for. It's much bigger and very specific to the EU legal apparatus.

It hence makes no sense to ship globally. What you want is for the underlying privacy controls to be available for everyone...which according to Reuters is what FB is doing!


Remember when myspace became my`____` in an attempt to rebrand itself? That amount of cringe, and no regulations.


Anyone have an archive? I blocked them in my hosts file.
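(For anyone who wants to do the same, it's just a few null-routed lines in /etc/hosts, or the Windows equivalent; extend the domain list to taste:)

    # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
    0.0.0.0 facebook.com
    0.0.0.0 www.facebook.com
    0.0.0.0 fb.com
    0.0.0.0 www.fb.com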


Two weeks ago we promised to take a hard look at the information apps can use when you connect them to Facebook as well as other data practices. Today, we want to update you on the changes we’re making to better protect your Facebook information. We expect to make more changes over the coming months — and will keep you updated on our progress. Here are the details of the nine most important changes we are making.

Events API: Until today, people could grant an app permission to get information about events they host or attend, including private events. This made it easy to add Facebook Events to calendar, ticketing or other apps. But Facebook Events have information about other people’s attendance as well as posts on the event wall, so it’s important that we ensure apps use their access appropriately. Starting today, apps using the API will no longer be able to access the guest list or posts on the event wall. And in the future, only apps we approve that agree to strict requirements will be allowed to use the Events API.

Groups API: Currently apps need the permission of a group admin or member to access group content for closed groups, and the permission of an admin for secret groups. These apps help admins do things like easily post and respond to content in their groups. However, there is information about people and conversations in groups that we want to make sure is better protected. Going forward, all third-party apps using the Groups API will need approval from Facebook and an admin to ensure they benefit the group. Apps will no longer be able to access the member list of a group. And we’re also removing personal information, such as names and profile photos, attached to posts or comments that approved apps can access.

Pages API: Until today, any app could use the Pages API to read posts or comments from any Page. This let developers create tools for Page owners to help them do things like schedule posts and reply to comments or messages. But it also let apps access more data than necessary. We want to make sure Page information is only available to apps providing useful services to our community. So starting today, all future access to the Pages API will need to be approved by Facebook.

Facebook Login: Two weeks ago we announced important changes to Facebook Login. Starting today, Facebook will need to approve all apps that request access to information such as check-ins, likes, photos, posts, videos, events and groups. We started approving these permissions in 2014, but now we’re tightening our review process — requiring these apps to agree to strict requirements before they can access this data. We will also no longer allow apps to ask for access to personal information such as religious or political views, relationship status and details, custom friends lists, education and work history, fitness activity, book reading activity, music listening activity, news reading, video watch activity, and games activity. In the next week, we will remove a developer’s ability to request data people shared with them if it appears they have not used the app in the last 3 months.

Instagram Platform API: We’re making the recently announced deprecation of the Instagram Platform API effective today. You can find more information here.

Search and Account Recovery: Until today, people could enter another person’s phone number or email address into Facebook search to help find them. This has been especially useful for finding your friends in languages which take more effort to type out a full name, or where many people have the same name. In Bangladesh, for example, this feature makes up 7% of all searches. However, malicious actors have also abused these features to scrape public profile information by submitting phone numbers or email addresses they already have through search and account recovery. Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way. So we have now disabled this feature. We’re also making changes to account recovery to reduce the risk of scraping as well.

Call and Text History: Call and text history is part of an opt-in feature for people using Messenger or Facebook Lite on Android. This means we can surface the people you most frequently connect with at the top of your contact list. We’ve reviewed this feature to confirm that Facebook does not collect the content of messages — and will delete all logs older than one year. In the future, the client will only upload to our servers the information needed to offer this feature — not broader data such as the time of calls.

Data Providers and Partner Categories: Last week we announced our plans to shut down Partner Categories, a product that lets third-party data providers offer their targeting directly on Facebook.

App Controls: Finally, starting on Monday, April 9, we’ll show people a link at the top of their News Feed so they can see what apps they use — and the information they have shared with those apps. People will also be able to remove apps that they no longer want. As part of this process we will also tell people if their information may have been improperly shared with Cambridge Analytica.

In total, we believe the Facebook information of up to 87 million people — mostly in the US — may have been improperly shared with Cambridge Analytica.

Overall, we believe these changes will better protect people’s information while still enabling developers to create useful experiences. We know we have more work to do — and we’ll keep you updated as we make more changes. You can find more details on the platform changes in our Facebook Developer Blog.


Me too :)


This is reminiscent of the Yahoo breach, where they said "some" and eventually upgraded the warning to "all of them".


Are there any companies that rely on Facebook data that are going to face a rough patch (or cease to exist) because of the API changes?


One site your question made me remember is Heyevent, a third-party app for Facebook events.

http://heyevent.com/

Out of curiosity I just re-authorised them for access and was shown the following message:

"We're sad to announce that due to dwindling traffic, expensive hosting costs, and new limitations of the Facebook API, we've decided to close down Heyevent. We're sad to have to do this, but we unfortunatelu we see no other option. Since the launch of Heyevent, Facebook themselves has added more event recommendation. They're not as good as Heyevent's recommendations, yet. Hopefully they'll get better. Thanks for using the service!"

I found it somewhat useful in the past for keeping my Facebook use to a minimum, so I wish them the best.

There must be other data-based services experiencing difficulties, but I can't think of any others right now.


I find it ironic that they plan to let users know they've been compromised via the newsfeed. What about users who have either deleted their accounts or never log in? How will they find out? Is their likely defense, 'ignorance is no excuse', really a great defense if the only way to avoid ignorance in this case is to log into their platform?


I'm not sure what you mean. Is there any particular action to be taken? It's not like you need to change your password or credit card.


It is not in the newsfeed... it is the News Room. I didn't login and I was able to read this.


Sorry, was reading multiple articles on this topic at once. Yes this is a newsroom post, however they plan to let individual users know via their newsfeeds: https://www.bloomberg.com/news/articles/2018-04-04/facebook-... "Facebook says it will tell people, in a notice at the top of their news feeds starting April 9, if their information may have been improperly shared with Cambridge Analytica."


No, you were right, they mentioned the News Feed in this post too.


> App Controls: Finally, starting on Monday, April 9, we’ll show people a link at the top of their News Feed so they can see what apps they use — and the information they have shared with those apps. People will also be able to remove apps that they no longer want. As part of this process we will also tell people if their information may have been improperly shared with Cambridge Analytica.


Thanks Facebook, you will tell me the information you have already leaked to anyone that wanted it.


“Events API: Until today, people could grant an app permission to get information about events they host or attend, including private events. This made it easy to add Facebook Events to calendar, ticketing or other apps. But Facebook Events have information about other people’s attendance as well as posts on the event wall, so it’s important that we ensure apps use their access appropriately. Starting today, apps using the API will no longer be able to access the guest list or posts on the event wall. And in the future, only apps we approve that agree to strict requirements will be allowed to use the Events API.”

Does this mean we won’t be able to show FB events and rsvps in our app?

https://qbix.com/calendar
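For context, the kind of call an events app makes today looks roughly like this (field names from memory of the v2.x Graph API, so treat this as illustrative, not verified):

    import requests

    EVENT_ID = "1234567890"       # hypothetical event id
    TOKEN = "USER_ACCESS_TOKEN"   # placeholder token

    resp = requests.get(
        "https://graph.facebook.com/v2.12/%s" % EVENT_ID,
        params={"fields": "name,start_time,attending,feed",
                "access_token": TOKEN},
    )
    print(resp.json())

Per the announcement, the "attending" and "feed" parts stop working today, and the rest will need Facebook's approval going forward.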


Facebook's problem is that it has no incentive to fix these issues, which it sees as "features" for the most part.

That's why it always seems to improve its privacy policy only as a result of regulations and scandals. Even then, it only does the minimum necessary that it believes will appease everyone for the time being and until the next scandal.


Facebook is still moving fast and breaking things, just not their things. The morality of an organisation reflects that of its senior management. Zuckerberg doesn't care about breaching privacy so why should Facebook? He only cares that he was caught.


Anyone else find the font being used on that page incredibly hard to read?


Yes.


"improperly shared"


How much was properly shared?


What was shared would be an additional question. Edit: Or maybe what was not shared would be easier to answer.


It seems that no one is addressing the problem of segmentation and microtargeting of ads, and that's key: how the distorted message is delivered. We will not be able to build psychographic models as easily (but data will be acquired in other ways), yet it gets easier every day to target super precisely with ads...


Assuming that they have the info on the accounts to make that number estimate, are they going to contact those who were affected?

I mean to say, if, for example, my mom was one of the Facebook users who had their information taken/used by CA, what should she expect?


Yes, they previously claimed that they will be reaching out to people affected by the Cambridge Analytica scandal.


Since #deleteFacebook started to gather momentum, the number of active users in The Federation has risen by about 30%: https://the-federation.info/


What is hilarious is they still hide privacy controls from people, especially by limiting access from desktop. They're awful, and people will be slow to leave, but those who don't find a lot of use for it will leave in droves.


There's a sort of running gag at Google that "An Update On Foo" is how the company announces that they're canceling Foo. So my first read of this headline was.. different.


The most important part:

> In total, we believe the Facebook information of up to 87 million people — mostly in the US — may have been improperly shared with Cambridge Analytica.


I am admin for a 'closed' facebook group. How can I find out which apps have accessed my group without admin permission?


What'll the number be up to by the time Zuckerberg is in front of Congress?


"We're really going to restrict it this time - pinky swear!"


Were all those people democrats?


Facebook, do no evil.


Define, "improperly".


They mean in violation of their terms.


What's not being said is that the Russians likely had the data too for use in the 2016 elections.

See https://www.theguardian.com/uk-news/2018/mar/21/facebook-row... and the testimony Christopher Wylie gave before the UK House of Commons Select Committee. He's the CA whistleblower. It's a very illuminating watch - https://www.youtube.com/watch?v=X5g6IJm7YJQ

So to recap:

[1] Kogan writes a FB app for CA to create psychographic profiles of users. This data allows people to be targeted by how naturally susceptible they are to rumors and fake news.

[2] Kogan then accepts a position at St. Petersburg State University (so essentially Russian money) and moves there

[3] Russians subsequently magically gain a new superpower they didn't have before - the ability to target users on FB who are susceptible to manipulation towards extreme viewpoints.


The interesting question, whose answer will likely have to be forced through some leak or investigative reporting, is whether FB and the execs knew about the potential connection to Russia in 2016. If it was brought to the attention of Zuckerberg and Sandberg, in light of their adamant denials back in 2016 and 2017, I don't see how the company or executives walk out of this unscathed.


Yes, if they knew about it and didn't do anything, then that's willful negligence.

They are not helping their image - the latest is a rejection of GDPR for non-European customers. Seriously? Way to demonstrate commitment to user data protection FB.

This level of arrogance is a precursor to strict regulations. They are practically asking for it at this point.


Weaponized idiocy.


"Facebook, in fact, claims lofty goals, saying it seeks to "bring us closer together" and "build a global community."

Those are indeed noble purposes that social media can serve. But if they were Facebook's true goals, we would not be here.

The ideal competitor and successor to Facebook would be a platform that actually puts such goals first.

To do so, however, it cannot be just another data-hoarder, like Google Plus.

If we have learned anything over the last decade, it is that advertising and data-collection models are incompatible with a trustworthy social media network.

...

When a company fails, as Facebook has, it is natural for the government to demand that it fix itself or face regulation. ...

If today's privacy scandals lead us merely to install Facebook as a regulated monopolist, insulated from competition, we will have failed completely.

The world does not need an established church of social media."

Source:

Tim Wu, law professor at Columbia University

https://www.nytimes.com/2018/04/03/opinion/facebook-fix-repl...


I love Tim Wu and God knows I relied on his work for my own PhD (on precisely that topic: Facebook's monopolistic dominance), but his essay betrays exactly this contradiction:

- He starts by attacking Facebook for pursuing growth:

> Facebook prioritized growth over any other possible goal

- but, when offering a solution, he wants exactly the same thing for his proposed alternative:

> the real challenge is gaining a critical mass of users.

I’m not sure changing the name but not the presumed initial values will help.

But then he writes:

> for which users would pay a small fee

Anyone old enough to remember the early days of Facebook will remember how Mark Z. had to defend against that idea: being locked into a paid service was the worst that could happen. Tim Wu knows that. Actually, everyone who is not on Facebook should know that too, because that rumour became such a problem for the company that it ended up occupying the most precious location on the service: the login page.

> It's free and always will be.


It's easy to sit on the sidelines and criticise Facebook's business model.

The hard part is coming up with a business model that doesn't rely on advertising and is actually going to get any traction, especially since people by and large like the current model. If Facebook can simply lock down their APIs and handle state-sponsored and other nefarious actors better, I think you will find the public moving past the current situation.


Does it have to be a business model? Couldn't we search for ways to support the costs of federated/distributed platforms, as used to be the case with NNTP?


You're absolutely right! We could search for funding models, plural, to support a network of federated and distributed platforms! We could even couple that with ways we know of to make distributed and federated platforms interoperate, and to keep at bay the problems that come with that.

Of course, it's possible that this is a sufficiently non-trivial problem that the best answer anyone's found to date is a centralized business. But hey, we won't know until we try! Also, email as a federated system (and its history) doesn't count, because it runs directly against the basic thesis that nobody has seriously tried.


The problem is that any company that starts with non-advertising goals can and probably will be acquired if they start to take off. The acquirer will most likely be ad based.

The other problem is that social media isn't novel or interesting in 2018, so I don't see people rushing to a new platform that replicates the same old functionality.


Messenger was expected to come up with non-advertising-based business models. The most credible rumour was selling large companies the ability to communicate with their customers and prospects over Messenger, a trusted and spam-free platform (email appears ineffective because of poorly targeted marketing). Having payments was a key step in that direction: you could not just change your flight over Messenger but also pay the extra fee to do so.

Workplace (née Facebook at Work) is now considering payment options, too. That could actually bring in quite a bit, especially if inter-company communications are worked out.

The reason that ads remain on Facebook is that the senior team believes they can make the ads genuinely improve the quality of the experience. I personally block a lot of ads on Facebook (about 80-90% of what I see) and actively look at my Ad preferences, and I get genuinely useful ads -- often new offers from competing businesses, which allow me to monitor them. I can imagine why many people don't see it that way, but I would love to know where the limits of that model are. I would more generally encourage people to treat targeting (on Facebook and elsewhere) as something they need to be proactive about, and to trust at least some platforms to use that information to improve their experience. That way, we could learn more about how to connect brands and customers better.


I personally would never pay Facebook for anything or pay for any sort of social media. It's just barely worth the time I spend ingesting it and nothing more to me, certainly not any money.

I'm also not super bothered by ads and fully aware that the data I give consumer internet companies will be used as they (or their future acquirers) see fit.

I don't agree with the sentiment that if we could only switch business models from advertising to subscription then all of our data will remain private. Apple's business model is not based on ads and I trust them as little as I trust Google. I know that's not a popular opinion here but I think it's prudent. Once I push information from myself to a company, I have no confidence that that information will be used as advertised.


"Morgan Stanley cut its price target on Facebook shares to $200 from $230 on Wednesday, citing concerns about the social media company's ad sales because of its data scandal.

...

"While we think FB's high advertising performance speaks to the value users get out of the ads served, general consumer dislike towards advertising and increased data scrutiny could cause more users to opt-out of sharing data with FB," he said."

Source:

https://www.cnbc.com/2018/04/04/morgan-stanley-lowers-facebo...


"Facebook is asking users whether they think it's "good for the world" in a poll sent to an unspecified number of people."

Source:

https://www.popularmechanics.com/technology/apps/a19671683/f...


"In that way and depending on how your personal data is manipulated, Facebook Login could almost fall under the category of a dark pattern - a method for websites or apps to get you to give up more information than is required by playing on assumptions.

...

But it's fair to say that Facebook should be less trusted today than it was nine years ago.

...

Just like we should all be doing our part to detangle our lives from Facebook's web, app developers owe it to users to divest in their reliance on Facebook Login.

...

It's one thing to offer Facebook Login as an alternative way to easily create an account, but to straight up not offer any other way to log in to an app or game is just lazy on the developer's part, and speaks to the way Facebook has lulled us all into complacency."

Source:

https://www.androidcentral.com/its-time-app-developers-fall-...


"Today, however, the company [Facebook] announced sweeping changes to many of its most prominent APIs, restricting developer access in a number of crucial ways.

Soon after, Tinder users started noting on Twitter that they had been kicked off the dating app and couldn't log back on, as those who used Facebook Login were caught in an infinite loop that appears to be related to an unknown bug.

Since you need a Facebook account to log into Tinder, this bug has potentially affected Tinder's entire user base.

...

Tinder has responded in a tweet, "A technical issue is preventing users from logging into Tinder. We apologize for the inconvenience and are working to have everyone swiping again soon."

Source:

https://www.theverge.com/2018/4/4/17200034/facebook-broke-ti...


"In a statement today, the social-media giant estimated 622,161 Facebook users in Canada had their data improperly shared with Cambridge Analytica through apps used by themselves or their friends.

Overall, Facebook says 87 million of its users were affected -- with nearly 82 per cent of them believed to be located in the United States. ...

Canada's acting minister for democratic institutions has also said he'd be open to strengthening federal privacy laws, which don't currently apply to political parties."

Source:

https://www.cp24.com/news/more-than-620-000-canadians-affect...


  Zuck: Yeah so if you ever need info about anyone at Harvard
  Zuck: Just ask
  Zuck: I have over 4,000 emails, pictures, addresses, SNS
  [friend]: What? How'd you manage that one?
  Zuck: People just submitted it.
  Zuck: I don't know why.
  Zuck: They "trust me"
  Zuck: Dumb fucks


If I were Mark@Facebook, I would take a strong, defiant stance against the assaults and further embolden the brand. Facebook is a mass-sharing community program and they are attacking its core. Defend it!


It's poetic justice for a company that started by 'scraping' MySpace text-entry boxes and calling the result its own. The DNA of Facebook was bad from the start, on many levels: Zuckerberg using a business card that stated "I'm CEO bitch", calling his users dumb fu*cks, and building the platform by copying Friendster and MySpace to voyeuristically spy on users across college campuses, rather than doing something at an algorithmic level that was difficult to duplicate, as Google did. That is the difference between social media and search, which is now built on AI/ML/NLP.


I half-seriously agree.

People don't identify with the photo-ops-in-Iowa Zuck. I think they prefer the "you have part of my attention; you have the minimum amount" version.

He's going to be vilified either way.


This is propaganda put out by intelligence agencies.

So what happened here? Facebook users who took a 'personality quiz' allowed the 'app' to access their information.

It is extremely depressing that 87 million people are idiots.

There are intrusive apps all over Facebook, and this is being released with a narrative that supports 'election interference'.


No, 87 million people didn't take a quiz; the whole point is that, previously, if any of your friends took the test, your information was shared as well.

That's how a few hundred thousand people taking a "personality quiz" gets turned into data on millions of users (see the back-of-envelope sketch below).

And the term "personality quiz" is used loosely; an example of a "personality quiz" can be "Which Game of Thrones character are you?" This isn't rigorous psychometrics.
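
A back-of-envelope sketch of that multiplier, using the publicly reported ~270,000 installs; the average-friends figure is an assumption inferred to match Facebook's own 87 million estimate, not a reported number.

  # Reported: ~270,000 people actually installed the quiz app.
  quiz_takers = 270_000

  # Assumed: average unique friends exposed per installer (inferred, not reported).
  avg_unique_friends = 322

  # Each installer exposes their own profile plus their friends' profiles.
  profiles_reached = quiz_takers * (1 + avg_unique_friends)
  print(f"{profiles_reached:,}")  # 87,210,000 -- the order of FB's 87M figure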


You are only partially correct.

Social networks were extrapolated from friend data: Cambridge Analytica was able to use social connections to profile people and the people they know.

That was the extent of it.


You are only totally wrong.

This isn't "propaganda put out by intelligence agencies", and there are not "37 million idiots" who "took a 'personality quiz' allowing the 'app' to access their information".

But he already explained that to you quite clearly, something you should have already known if you were following the real news, and you still don't get it.

But since you choose to subscribe to the conspiracy theory that this is just all deep state propaganda, and everyone whose privacy was compromised was an idiot who asked for it by doing something foolish and deserves what they got, then there's no use in discussing it with you.

Because you're their ideal target and they've already successfully targeted you and influenced your mind, even though you didn't take a personality quiz yourself.



