Hacker News
Facebook had ample warning about privacy problems with “contact import” feature (wired.com)
174 points by fortran77 on April 11, 2021 | 90 comments



I have an honest question, hope it is not deemed troll bait.

I lurk and read quite a bit here on HN. I think I've come to feel pretty versed in the common anti-social-media gripes and stances, and that Facebook is one of the leading villains there.

But I also read that there are some really smart and empowered/supported people at Facebook and they make some pretty desirable stuff (things like React, etc).

On the one hand I picture a classic clueless, business-people-driven culture grinding down poor, demotivated engineers, and so stuff like the latest hack happens.

But then, I see this other picture where quite functional teams are doing good engineering.

I'm not used to these things coexisting well, especially for extended periods. Am I being too naive about that? Or perhaps my simplistic impressions listed above are too naive?

What's the real story? Any Facebook insiders here who can add insight?

(Edit and disclaimer: I am neither a Facebook user nor a React programmer)


I'll bite. Worked at FB for a couple of years, until very recently. The HN groupthink on the company is quite strong, reaching conspiracy theory levels in some cases. Thing is, 95% of users (which is like, half the world) either do not know or care about the latest controversy.

Personally, I just do not buy into this "Facebook is evil" narrative, and even less so having worked there. Social media holds a mirror up to society, and we don't like what we see there. It's not our fault, it's Facebook's! So screams the press that is losing subscribers to new forms of media. Is FB sometimes incompetent? Sure (who isn't at that scale). Could they do things better? Of course. Are they trying to? Absolutely (the cynics would never admit this point, but it's true, e.g. the amount of resources and cross-company efforts thrown at improving privacy is incredible).

From my perspective, Facebook (and WhatsApp, and Oculus) have added a lot more to my life than they have taken away. My family and friends are all on Facebook, so clearly they feel similarly. Internally, there are indeed plenty of smart, empowered people that also haven't bought into the doom and gloom of what, ultimately, is a loud minority.

I fully recognize that my priorities might be different from those of others. Or perhaps they are impacted by the negatives differently. All opinions have their place, and I don't mean to speak for all employees either, just sharing my own point of view.


> Could they do things better? Of course. Are they trying to? Absolutely (the cynics would never admit this point, but it's true, e.g. the amount of resources and cross-company efforts thrown at improving privacy is incredible).

What about things like lying that phone numbers provided for SMS authentication would not be used for other purposes?

What about the mental gymnastics to work around privacy-related legal obligations?

EU regulations require allowing users to get their data in readable form - so what is FB doing? Claiming that the data is too complex and impossible to provide in readable form, and that they are therefore legally forbidden from exporting most of it. FB listed targeted advertisements among its obligations, and claims that as a result it does not need user consent to spy on users, as doing so is its legal obligation.

Maybe FB is a mirror, but they are busy mirroring the nastiest - and the most profitable - parts.

Their supposed attempts seem to be rather a smokescreen to me.


Look, my goal here is not to be an apologist for Facebook, a company I told in my exit interview that I would not consider joining again (for entirely different reasons). Nor do I want to convince anyone that they're wrong. In fact, in some ways you are right.

But at the same time, realize that Facebook is not some giant monolith. These are thousands of teams, working in a highly bottoms-up, impact-oriented culture. I read a comment earlier about engineering at scale and how it's a people problem - that's 100% it. People are not infallible, they make mistakes. What I did not see during my tenure there is any malicious intent, people genuinely wanted to build a valuable product and make broken stuff better. Again, my localized experience only.

From what I recall (could be wrong here, not first-hand knowledge), the SMS thing was a decision made by a single team ages ago, before privacy was at the forefront. It was a bad decision, but ultimately a technical detail lost in a sea of other technical details. The solution being worked on presently is a systemic one, that ideally would prevent this sort of thing in the future (though the realist in me is skeptical).


> From what I recall (could be wrong here, not first-hand knowledge), the SMS thing was a decision made by a single team ages ago

I have worked in a tiny charitable organisation, where it was easily understood that if we collected user telephone numbers, there would be constant pressure to use them for other purposes. And, therefore, that we didn't have the capacity to honestly and securely promise to users that we and our future replacements would use them only for one purpose.

This is not a failure of one team "before privacy was at the forefront" (when?).

It's a systemic failure of multiple teams and of Facebook's culture of constant privacy infringement and misdirection. That this implicit promise to users was made so lightly and not considered important enough to follow through on. As such, it's a breach of contract which I hope is picked up legally.


> This is not a failure of one team "before privacy was at the forefront" (when?).

If that was not a rhetorical question (and I suspect it was): from 2006 to 2016, the main concern around Facebook was monopolies and interoperability. I would know because that was the topic of my PhD. Many people argued for open protocols (Stratechery regularly, TechDirt did a big piece about it a year ago, WaitButWhy several years ago too) that would allow individuals to extract their social graph (a list of their friends, or information about them) from one service to find them on another service, hoping that this would lower the influence of a single service. It’s graph-sharing tools like those that were key in all the recent “scandals” about the company, which often simultaneously blame Facebook for not preventing other services from exporting information and for being “a roach motel” where information never leaves (per Cory Doctorow).

One particular step involved here, the ability to find friends on Facebook, is part of a common priority at Facebook: “growth”. Many people see it as a blind, profit-seeking effort. It’s actually motivated by altruistic aims. Outside of the young, educated, technology-savvy classes of the Western world (where most people who could benefit from social tools already know how to use them), being connected to more friends remains a major positive thing. It’s rarely financially beneficial for Facebook in the short term to encourage that, because it incurs costs without matching advertising revenue (most of the ad sales are still in the US and, to a lesser extent, Europe). “Growth” has led to several misguided efforts and clumsy attempts (typically: People-you-may-know), but it’s not the rabid, unbridled capitalist effort that opponents outside the company like to paint. It was often the primary objective of several teams, outside of interoperability, that contributed to parts of complex systems that ended up being abused later.

One final aspect that the original comment might not have clarified is that those tools are often poorly known, even internally; they can be years old and still around after the people who built them, and who were familiar with potential abuse vectors, have gone. Many people build other tools that can, in combination, lead to bad things, without knowing what’s there.

The team that built the infamous year-in-review (one guy, really) was trying to demonstrate how to share several posts. He was surprised when it was massively adopted, and more so when it was promoted without consulting him. The people who tried to surface that tool were curious about another question: nostalgia and the relevance of past posts. It led to one person famously having to revisit the grief of losing his mother, because no one consulted the team in charge of writing empathic messages. The priority then was not to smother good ideas with committees but to encourage every idea, including privacy-preserving projects, and many empathy-defending ones too. I can personally confirm that many projects defending both (privacy and empathy) that are still massively popular today would not have been released without that third priority of letting individual employees try things without overburdening supervision.

Until 2016, privacy was not a priority but a non-negotiable, primary absolute. It was enforced by the code (literally: the site is written in Hack, a version of PHP that has access control hard-coded). Users could not see information they were not supposed to (say, a comment by an ex who had blocked them on a post by a shared friend) because of those hard-coded controls. It was understood to be the most that the company could do. Confusing situations challenged that absolutist view (like users volunteering, through an API and for financial gain, which pages their friends were fans of, something they could legitimately see), and privacy became not just an absolute condition of any work (one that was easy to ignore because it was pushed one layer of abstraction down) but also a more nuanced approach that required conscious, expensive and sometimes frustrating war-gaming of “leaks” through complex adversarial scenarios.
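To make that concrete, here is a minimal sketch of the viewer-context idea, in Python rather than Hack, with illustrative names rather than Facebook's actual internals:

    # A rough sketch (illustrative, not Facebook's actual code, which is
    # written in Hack): access control is enforced at the data layer, so
    # every fetch requires a viewer, and a comment by someone who has
    # blocked that viewer never even reaches the rendering code.
    from dataclasses import dataclass, field

    @dataclass
    class User:
        name: str
        blocked: set = field(default_factory=set)  # names this user blocked

    @dataclass
    class Comment:
        author: User
        text: str

        def visible_to(self, viewer: "User") -> bool:
            # Hard-coded check: blocked viewers can never see the comment.
            return viewer.name not in self.author.blocked

    def fetch_comments(comments: list, viewer: User) -> list:
        # The only way to read data is through the viewer check.
        return [c for c in comments if c.visible_to(viewer)]

    ex = User("ex", blocked={"me"})
    me = User("me")
    thread = [Comment(ex, "hi"), Comment(User("friend"), "hello")]
    print([c.text for c in fetch_comments(thread, me)])  # prints ['hello']

The point is structural: rendering code never receives data that the viewer check has not approved, so a whole class of leaks cannot be written by accident.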

Because of that new priority, Facebook closed a lot of old APIs then, to the dismay of many researchers who used them to analyse the company and its practices, including privacy and its monopoly position.

> It's a systemic failure of multiple teams and of Facebook's culture of constant privacy infringement and misdirection.

No.


> What I did not see during my tenure there is any malicious intent, people genuinely wanted to build a valuable product and make broken stuff better.

But the argument is not that the many individual well intentioned engineers at Facebook (or even Zuckerberg himself) set out maliciously to create a monster; but rather that the confluence of new technology, human nature, incentives, advertisement, tracking etc. has resulted in something that has many negative consequences (which might exceed the positives); and of which they profit very handsomely.


I am not claiming that everyone working in FB is malicious. But if somehow someone missed any malicious intent from FB they seem to be deliberately ignoring glaring issues.

> People are not infallible, they make mistakes. What I did not see during my tenure there is any malicious intent

I never worked at FB but I consider things like https://noyb.eu/en/facebooks-gdpr-bypass-reaches-austrian-su... to be malicious

> On 25.5.2018 at midnight, when the GDPR became applicable, Facebook has simply named things like "personalized advertisement" in its terms and conditions. Facebook now argues that it has a "duty to provide personalized advertisement" to the users, therefore, it does not need the user's consent to process his or her personal data.

Or https://news.ycombinator.com/item?id=26777739

> It's interesting how Guy Rosen, a co-founder of the spyware company Onavo, is now Facebook's "vice-president of integrity"


> But at the same time, realize that Facebook is not some giant monolith. These are thousands of teams, working in a highly bottoms-up, impact-oriented culture.

Okay, but let's cycle back around to your original point.

A bottoms-up culture in which mistakes like this can happen is not a culture that is responsible enough to hold information like my phone number, address, credit card information, etc... It's not competent enough to manage a voice assistant, or to have access to my contacts.

We're kind of dancing around the original poster's confusion, which was:

> "On the one hand I picture a classic clueless business people driven culture driving poor demotivated engineers and so stuff like the latest hack happen."

> "But then, I see this other picture where teams of quite functional teams are doing good engineering."

But you're not reconciling those pictures. You're trying to paint a picture of a good organization that means well and that is constantly getting better, but that also makes giant mistakes like misusing data and breaking data silos because it's too large to manage and because the culture encourages playing fast-and-loose with data.

The question is, when Facebook starts collecting IDs to validate accounts, when Facebook starts releasing voice assistants, or trying to launch a currency, how can we confidently say that these mistakes won't keep happening? Because to just say "people aren't infallible" isn't enough when talking about an organization that wants this kind of power and insight into everyone's information.

----

So here's how I reconcile those pictures: Facebook has talented people employed who like to believe they're doing the right things. But they're not willing to take this seriously or to really think about the impact of their products. Facebook is not holding off on collecting data because they're worried they might leak it. Facebook is asking people to verify information, they want your cell phone number, they're collecting PII on people who don't even have accounts. They want people to send money over their chat app, and to talk to businesses using their platform, and to schedule events, and to link their contacts, and send their location. They want basically all of the data, and none of these leaks has changed their attitude towards data collection.

So I don't think Facebook is incompetent; I think it's about average, just like most other businesses. But the difference is that it wants to be seen as a highly competent, infallible company that's worthy of unprecedented access into people's lives until something goes wrong. Then, once everyone's phone number is leaked, it falls back on the "pobody's nerfect" card, just like it already has so many times before.

And I think that's a big source of anger, because it's hard to look at that behavior without coming away with the idea that the company is either malicious or just grossly irresponsible. No, they're not malicious in the sense that they literally want to leak your information, but they're clearly not losing sleep over the fact that they're putting your information at risk, even though they've demonstrated over and over again that they can't keep it secure. I don't know what a good word for that is. If not "malicious", maybe "callous" or "uncaring"?

Facebook is a group of average-to-talented developers and managers who are nevertheless unwilling to take the steps necessary to actually protect my data, and they're actively hostile to me keeping my data from them (remember, phone numbers were leaked here from accounts that had been deleted, from people who had signaled that they did not want Facebook to have that information any more). So maybe Facebook devs "care" in the abstract sense, but they don't appear to care in any of the practical ways that really matter.

And I'm just so tired of people telling me that the culture at Facebook is changing when it is so clearly not changing. Facebook devs have been saying the same thing to me after every controversy for over a decade at this point. I'm excited that we're no longer experimenting on depressed teens, and now we've graduated to leaking every single phone number online, trying to cover it up, and then claiming to press that it's not a real leak. Maybe in another 10 years we'll only be misrepresenting impression numbers to advertisers. Progress!


> Personally, I just do not buy into this "Facebook is evil" narrative, and even less so having worked there.

Of course you don't. To buy in to the 'Facebook is evil' concept you'd have to see yourself as part of the problem and chances are that you would never do that.

Personally, I think Facebook could be good, and probably is good for many people. But I also see the dark side and the downside (shadow profiles, for instance, and a large collection of dark patterns trying to rope people in and keep them there, the walled garden that it tries to be). It's AOL reimagined, and when AOL went the way of the dodo I was quite happy.

If I had the choice between working on Facebook or on anything else, I would pick the anything-else option, because even if Facebook isn't evil per se, I strongly believe that it is a net negative, and its leadership disgusts me.


There are no shadow profiles.



I’m a direct source on that point. You are quoting indirect sources.


Facebook and Facebook representatives have lied about many things.

There is no good reason to assume that they are telling the truth.

And in addition, why do you think that you know *everything* about Facebook?


Saving, even in hashed form, my phone number or email address to later be able to connect me to people I already know is a shadow profile, even if it isn't a profile as Facebook might define it. Anyone can easily verify this by adding themselves to Facebook (or Xing, for that matter) and seeing suggestions that are obviously based on that. In essence, you are either ignorant of this or lying.
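To spell out the mechanism (a minimal Python sketch; the hashing scheme and names are illustrative assumptions, not Facebook's actual pipeline):

    # Why a hashed phone number still links people: numbers are normalized
    # before hashing, so the same person hashes to the same value no matter
    # who uploads them or how the number is formatted.
    import hashlib

    def normalize(number: str) -> str:
        # Strip formatting so '+1 (555) 010-4477' and '1-555-010-4477' match.
        return "".join(ch for ch in number if ch.isdigit())

    def hash_number(number: str) -> str:
        return hashlib.sha256(normalize(number).encode()).hexdigest()

    # Server side: hashes uploaded from an existing user's address book.
    uploaded_contact_hashes = {hash_number("+1 (555) 010-4477")}

    # Later, someone signs up with that number; the service can immediately
    # link them to everyone who had them in an address book.
    if hash_number("1-555-010-4477") in uploaded_contact_hashes:
        print("suggest as 'person you may know'")

And because the space of valid phone numbers is tiny, hashing does not anonymize them: whoever holds the hashes can enumerate all plausible numbers and match a hash back to a specific person.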


I’m neither ignorant nor lying. I don’t see why storing hashes of a number makes it a profile. The argument in the links shared is that Facebook is storing ad-targeting information from people who are not connected, which is not true.

Your argument is that if you create a profile, you have a profile, therefore you had a profile before. I’m sure you are not lying about thinking that, so I have to assume that it’s the other option.


That’s a shadow profile dude, and you were lying when you said there isn’t a shadow profile.


> I don’t see why storing hashes of a number makes it a profile.

> even if it isn't a profile as facebook might define it

And this is the problem. It's a profile because you are storing information about me without my consent to do so. Any information is too much when I don't want anything to do with you. Just because it's not obvious or easily accessible doesn't mean it isn't there. So you are very obviously ignorant of the existence of shadow profiles, because you happen to define them more narrowly than I, or the GDPR, would. Facebook has no right to even know about my existence without my consent, even if they don't target me for ads. I did not, and do not, consent.


Facebook (not me, I don’t work there anymore) is storing information about *your friends*, namely a hash of *their* contact list, with their explicit consent to do so. Those numbers are only used to help friends find each other on the site, the exact reason the information was collected. Facebook didn’t tell you to share your phone number with your friends. You did. If you want to enforce a rule against their ability to share it further, go for it. I’m not sure how you are going to do that, though: sue the phone company, or sue your friends?

The only thing that Facebook can do to help you right the horrific wrong of someone sharing your phone number without your explicit consent —well, just the hash of ten digits, without any other information associated with it, like your name or location, so effectively an enforcement of its limited use— is to help you find all the culprits (presumably your parents, your cousin or your plumber), all of whom are obviously in need of legal sanction from the nearest privacy authority. Nothing says “Happy Mother’s Day” like a fine of 4% of her worldwide revenue.

Incidentally, Facebook made the process of finding them easy: that list is the first thing you should have seen when you _did_ consent and created an account. Because, well, in the comment above you wrote that you absolutely did not consent, but you also wrote that you created an account, during which you did consent, so… Anyway.

How that (Facebook using information shared willingly by your friends for its intended use) violates the GDPR baffles me a bit, and it seems to have escaped the authorities in charge of enforcing it too, because I don’t remember that tool violating anything but… Hey, being wrong has not stopped your previous comments, so I can’t be surprised it hasn’t stopped you from doing more armchair lawyering.

I’d love to highlight that you are commenting on a public site, about Facebook. Do you expect people who help the company understand its brand, or how it can improve its service, to read your comment? Store it? Analyse it? Did you consent to that, or do you refuse to look into how public statements work?

More seriously, why does something so shocking to you seem perfectly legal when you actually look at what happened (your friend uploaded their phone book)? Well, because the reality is that information doesn’t have clear ownership edges like goods do. Information is more often than not shared, often with implicit rules (like: don’t write my phone number on a bathroom stall in a dodgy bar with the mention “For a good time, call 123 456 78 90!”). Either party can presumably do a lot with that information, like sharing it with a third party, while the other party might not know or have the ability to prevent it. Even today, there’s almost no rule clarifying how to handle triads like this. GDPR still assumes that I own information exclusively. Any interpretation that I know of assumes that I am free to share my address book, my DMs, my genetic material, etc. without much consideration for the others affected. And I agree with you: that’s a concern, and it is a concern that Facebook shares, but their calls for clarification from authorities haven’t been heard.

Imagine that I use a crappy browser, or more realistically a spammy extension, that reads my emails and sends them to a North Korean hacker group: my correspondents haven’t approved of me sharing their thoughts with said dodgy group. I made that decision, possibly accepting the risks, or not understanding them at all. You won’t make much progress unless you establish whether it’s my role, yours, my friends’, the browser’s, the extension store operator’s, my email provider’s or theirs to know better, warn, explain, block or hash messages.

None of the experts, lawyers or advocates that I’ve spoken to about this seem to care about that nuance, except people who deal with genetic information, because tracking criminals through their relatives on ancestry.com has become massive. Well, they care, but no one has an answer to what crimes are heinous enough that ancestry.com can profit from solving them and selling that service to the police. Oh yeah, because while you are offended at Facebook (a company whose service you registered for, as per your previous message), you gladly ignore that your cousin gave away your shared genealogical information to a company willing to sell it to problematic institutions like ICE.

No one seems to realise that a lot of other people have your phone number and email by design, and those people will want to share them with online platforms for legitimate reasons (personal archive, productivity tools, flagging spammers), including reasons that could stitch more PII together. Well, no one except Facebook, which has thought about it out of necessity, and has set up several systems to match a few common expectations from what is still a very ambiguous situation.

But I’m sad to report that those expectations do not include: "We know nothing about dtx, but also we know that this is his phone number; we refuse to have anything to do with him, but also, we are going to store that you know him and that he’s asked us to never have anything to do with him, so we need to forget that this ever happened, and yet learn the lesson from not knowing that."


> Those numbers are only used to help friends find each other in the site

Yeah right. In the same way as 2FA phone numbers were never used for spying, targeted advertising and so on? Facebook lied about that too.

And even assuming that it is true: it still leaks info in horrible ways.

Like repeated cases of people treated by a given therapist getting friend suggestions for each other. Or people getting their friends' therapists as contact recommendations. And the same happening in other similar cases.


I didn't call you a liar, but when you start with "there's no shadow profile" and end with "information your friends have about you is totally legal to store", you've said things out of both sides of your mouth, apparently without any sense of contradiction.


Ok, let’s clarify then: what do those shadow profiles contain? Let’s assume that you have a friend who has an active profile and uploaded a list of contacts through one of the importers (email or phone numbers). They have an FBID, posting activity, etc.

What do you think is stored about you on their profile, and what do you think is stored on your "shadow profile"?


> Social media holds a mirror up to society

This would only really be the case without algorithmic influence on discoverability and on what people get shown on Facebook. "Engagement" is one of the keywords here, and it very much morphs this "reflection". Facebook might technically be a mirror, but if it is, it is one of those curved, distorting ones you used to see at carnivals and fairs.


Not in support of the above poster's views, btw, but I think he gave the analogy in a broader sense. Your take on this mirror analogy is from a micro level. I still think that, looking at FB from a bird's-eye view, the society-mirror assessment is fairly accurate.


That is a fair point, though in the sense of "mirror, mirror on the wall, who is the fairest of them all". People end up in bubbles because they're most receptive to particular type of content and that is what they end up engaging with. But yes, if it was up to me, I would kill the Share button and get an unfiltered view of my connections' updates, like in the early days.


Exactly. Arguably the algorithms are optimized for engagement, but it’s like saying opioids hold a mirror to society because so many who try them keep seeking them out.


I completely agree re: HN paranoia around FB. I feel like the engineers lose their minds (speaking as one) and need the FB boogieman to exist. Is there no room for the (very plausible) explanation that a group of people had benevolent intent, yet the wheels fell off when they didn't understand the scale of what they were working with?

This happens all the time at work for many of us, we underestimate scale and shit breaks. I don't know why the same logic can't be applied to Facebook, albeit taken to the extreme, being one of the biggest data hoarders on the planet.


> Is FB sometimes incompetent? Sure (who isn't at that scale). Could they do things better? Of course. Are they trying to? Absolutely (the cynics would never admit this point, but it's true, e.g. the amount of resources and cross-company efforts thrown at improving privacy is incredible).

This type of argument is made over and over on behalf of harmful institutions. The police sometimes screw up, but everyone makes mistakes. Cars (as a sibling comment points out) sometimes kill people, but that's just a fact of life, and car safety improves every year. In each case, the institution's responsibility for harm is deflected by the argument that the institution, in its current form, can't exist without doing some amount of harm. Not considered is the possibility that the institution could fundamentally change forms or stop existing entirely.


I will take this argument and transpose it to banking, a heavily regulated industry.

Are there TONS of data for an individual? Financial data, demographics (themselves and their family), tax IDs, social security numbers, where they live, where they shop, how much they spend, do they have a subscription to a "heterosexual-intercourse" website? do they have a subscription to a "same-sex-intercourse" website? do they have a subscription to a "donkey-sex-show" website? (think "Clerks"), Spotify, tickets to Metallica concerts vs the opera, Netflix, hotels, air tickets, restaurants, bars, you name it.

A bank has this and so much more information through bank account transactions, card (credit/debit) transactions, mobile phone app data collection, KYC, etc.

I don't remember a bank losing the data that "Henry likes donkey sex shows". Perhaps there is a naughty admin stalking a current/ex partner, misusing their work-related system account privileges, but that admin won't leak the data of 100,000 clients.

Yes, the bank doesn't have a photo of you at a political rally, but they see you spend £€$100 supporting XYZ politician or political party, or that you bought a hotdog 1 mile/km from the park where X politician gave a speech.

How come companies like FB keep making one mistake after another at OUR expense, and the excuse is "ooops, I did it again" (they played with our data)?

At some point the excuses need to cease and they need to get their stuff in order, or be shut down.

Edit: rewrote some words/phrases for clarity.


Financial institutions not only have frequent leaks, they also happily sell your data.

Each major credit card company will sell you, if you’re big enough, a list of all transactions that flow through their systems, not properly anonymized at all.

Banks use horribly outdated and unsafe technologies like checks, and rely on insurance instead of upgrading their tech.

Trading companies make money by selling your order flow and delaying your orders just enough that they can get a better deal than you.

Equifax leaked very detailed data, the kind that can destroy your life, on almost all Americans.

How exactly is the finance industry better?


That's American/USA reality, my friend :) It saddens me that you (plural - the nation) allow big corporations to bribe politicians openly and shamelessly (lobbying).

Brokers: for $1k per month you can subscribe to a service that will feed you that same info, in real time. So.. your argument half-stands.

Banks: most big banks are good/strong enough. Some smaller banks (1-20 branches) are weak; they haven't scaled enough. In the UK we now have 'open banking'. It gives you the 'freedom'(?????) to share your data with third parties. "What can go wrong"...

Equifax: thank US politicians for that. Bribing FTW!!! (aka lobbying).

Credit card: they "can’t sell raw consumer data to third parties unless they provide the customer with a notice and an opportunity to opt out. In some states, customers have to opt in. They can sell anonymized data, with all personally identifiable information stripped out or hashed." (https://www.forbes.com/sites/petercohan/2018/07/22/mastercar...).


Capital One had a major breach in 2019. Banks do have data breaches, pretty consistently in fact.

So did Equifax. I'm sure if you look around you'll find many, many more. The information exposed in those breaches is reliably worse than what was exposed by this Facebook breach, as an example.


> I don't remember a Bank losing the data that "Henry likes donkey sex shows".

Losing it, no, but they sell it constantly. That’s the most offensive part of the Cambridge Analytica scandal: personally identifiable, politically relevant information like what you describe was and still is sold at scale, but no one is going after the companies selling it because of that bogeyman. Facebook was blamed for something that Experian buys from credit card companies and sells to political consultants, for something that the Post Office sells to political parties.

Edit: Specifically, in the US, Cambridge Analytica (and every similar company) uses credit data to know your car model; they use Post Office data to know which magazines you subscribe to, and they match that through your name and address to your voter record to get party affiliation and participation. Guess what arguments would resonate with a Republican voter subscribed to _Guns & Ammo_ and still paying off his Ford F-150? I know that from having personally discussed their models with CA data scientists.

If you want details, check the Netflix documentary "The Great Hack" at exactly 1:00:00; you’ll see Brittany Kaiser showing the screen of her laptop, with a list of all the databases they used, with sources and exact numbers. It clearly says that Facebook data was scraped (it’s either public information, like data from the Ads audience network, or voluntarily shared information, like people clicking on pages and apps controlled by CA); the other databases were purchased.


Who says it's at your expense? No one forced you to get a Facebook account; that was YOUR choice.


Oculus users might disagree.


Facebook is infamous for building "shadow profiles" of people that have yet to register as users.


WhatsApp users might disagree


> Is FB sometimes incompetent? Sure (who isn't at that scale).

Do you mean to imply that we should be more forgiving of Facebook because of the scale you operate at? I feel like it’s the exact opposite. You very aggressively sought to grow to that scale; it’s not something that just happened to you. With that should come an immense sense of responsibility, but instead Facebook has answered with reckless behavior, lies and manipulation.

> the cynics would never admit this point, but it's true, e.g. the amount of resources and cross-company efforts thrown at improving privacy is incredible

Sure, call me a cynic. But I imagine this is where your perspective as an employee might be distorted. From the inside it might look like there’s a lot of effort put into something like this, but from the outside it’s clear it’s nowhere near enough.


> From the inside it might look like there’s a lot of effort put into something like this, but from the outside it’s clear it’s nowhere near enough.

I actually do believe it. The thing is, no matter how much work the privacy people put in, it will never be enough to solve Facebook's privacy problems. The problems are fundamental to the way Facebook works as a product.

It's a pattern you see in many places:

1. Identify a fundamental problem your organization causes.

2. Form a team responsible for fixing this problem.

3. Make sure this team is organizationally separate from the teams actually doing the work.

4. Give this team lots of resources and have them share whatever progress they manage to make.

What will happen is that this team will be able to make some improvements. Whenever someone criticizes you, you can point to these improvements and say (completely honestly) that the team is doing the best they can. Meanwhile, the core of your organization is protected. Any changes that would threaten the processes that sustain the organization itself can be written off as unrealistic.

EDIT: I should add that this pattern can appear without any malicious intent -- it's natural (though perhaps naive) to think a "privacy team" is the right way to solve privacy problems in one's organization. But we owe it to ourselves to be honest about what is really happening. Not just for Facebook's sake, but also for the sake of any organization which models itself after Facebook. Without an understanding of how this stuff works, organizations will end up reproducing Facebook's fundamental problems (privacy and otherwise) while believing those problems have been solved.


>Social media holds a mirror up to society

People also say cocaine brings out the true self, but I wouldn’t know.

> Facebook is not some giant monolith.

Harboring this thought has no actionable constructive value. Assuming the best intent, people here aren’t disgusted at fb engineers because they think they are making day-to-day evil decisions. They’re disgusted because they think fb engineers would have to believe facebook is overall a force for good, or at least neutral, to be able to continue working at facebook.

But rarely does an fb engineer here have the courage or the care to defend facebook in such a way. Often, they just point out that facebook is not a monolith.

I remember reading about a collection of interviews with drug dealers. The interviewer said the most common refrain they often heard from them was, “if I didn’t do it, someone else would.”

Even so, every day I thank whoever made these specific FAANG decisions (off the top of my head):

…adding the “Add to Home Screen” button in iOS Safari for web apps.

…adding Firefox support for Google Meet (Microsoft Teams and Zoom are still Chrome-only, I think)

I say this despite knowing these decisions were most likely not taken with altruistic intent and probably came from the top, relatively speaking.


Over time I came to compare Facebook with car culture. It plays a central role in a ton of people’s lives, provides a widely needed service to communities and allows a lifestyle that wouldn’t be possible otherwise. But with an incredible number of casualties and second/third-order negative effects on the whole environment.

The debate will always be “is it worth it?”. As we dial back on car presence in a lot of situations and try to counterbalance all the lobbying and bullshit that has been thrown at us by the industry for several decades, I see us coming to the same point with ad-based/attention-based industries. It’s not “evil”, but it needs a crazy amount of adjustment to come to a healthy point where the benefits are not drowned in societal pollution and casualties.


Car companies have made many documented efforts to kill control and mitigation measures. Most debates there are fairly straightforward and aligned: less pollution, fewer deaths from speed. And they have done very little in that direction.

Facebook aggressively tries to mitigate issues, but struggles to get its critics to admit that they can’t ask for more interoperability in one sentence and more privacy in the next; for more freedom to say what you please and more control of unacceptable speech; for respect for local mores and universally understandable rules. Facebook employees are constantly trying to argue for nuance, offering solutions, and getting downvoted and silenced because Facebook=Bad.

Find me one employee of a car company willing to breathe from the exhaust of an ICE. Even suggesting it might rightfully get you sent to a psych ward on suicide watch. But we can make cars non-polluting; they just chose not to. They refused so hard that electric cars, a century-old technology, were presented as inconceivable until they started losing sales to a weirdo a little too much a fan of anime.

What every employee of a traditional car company can easily conceive, and is very willing to excuse, is a slimy tale of union reps and executives paying prostitutes to convince environmental authorities to look the other way while they evade basic pollution controls (seriously, the VW scandal is dark, and human trafficking isn’t the bottom of it). Talk to your local oil & gas executive about “human rights violations as a cost of doing business in certain parts of the world”. All that at the cost of tens of millions of people dying of respiratory complications, and billions risking environmental disaster.

How do we get fewer than a million people killed in accidents every year? There are many ways, but a very easy one would be to prevent cars from driving faster than the maximum highway speed limit. It’s an elementary tool. Find me one employee of a car company who wouldn’t laugh you out of the room for suggesting it. “Nobody would accept being unable to repeatedly break the law, even if that meant we could save millions of lives.”

There is nothing in common between that nightmare and the debate around the legibility of privacy controls, or the rate of false positives in the automated translations used for violent-content detection in non-ASCII languages.


> but struggles to get its critic to admit that they can’t ask for more interoperability in one sentence and more privacy in the next

This is a conflict that is not nearly as fundamental as Facebook likes to pretend. In a lot of ways, the Cambridge Analytica controversy was a godsend to Facebook, because ever since then the company has been repeating the same tired line that APIs, data standards, and personal control of information are fundamentally at odds with privacy.

For the most part, they're not. A lot of what Facebook gets criticized for on the privacy front has nothing to do with interoperability. It's only a conflict because Facebook has a particular lens of privacy that revolves around Facebook owning as much information as possible and being the single steward of how that information is shared and accessed.

And I'll be the first to say that Cambridge Analytica was poorly reported. I'll even be the first to say that some privacy "advocates" don't have a good grasp on these issues. But my goodness if Facebook isn't engaged in exactly the same kind of propaganda, constantly encouraging the exact same confusion between scraping and leaking, between consensual data-sharing and hacking, whenever it fits their purposes and whenever it gives them an excuse to tighten the moat around their product. This most recent leak (I'm sorry, "scrape") is a perfect example of Facebook trying to explain away a poorly designed feature with a massive security hole around authentication and validation as if the real problem is interoperability and APIs.


> repeating the same tired line that APIs, data standards, and personal control of information are fundamentally at odds with privacy.

It’s not, in general, but it is if you share information with your relatives and they don’t understand the consequences of their actions. That unique property of social data is what creates that contradiction, and something that privacy advocates keep missing.

> But my goodness if Facebook isn't engaged in exactly the same kind of propaganda, constantly encouraging the exact same confusion between scraping and leaking, between consenusual data-sharing and hacking, whenever it fits their purposes and whenever it gives them an excuse to tighten the moat around their product.

There’s a lot of people at Facebook, me first, who would love to have a different communication, but we know from experience that we’ll get pummelled if we try to introduce nuances like that into the public debate. I always found that frustrating until someone reminded me of the trope about having two wolves inside of you. The story is corny, but it explains the problem with coverage of Facebook really well: the misguided critics make it so that spin doctors are the only ones who can have a positive impact on the company’s image.


> and something that privacy advocate keep missing.

The majority of privacy advocates understand that distinction, it's just that our criticisms of Facebook go beyond that distinction to a bunch of other issues. If you want to talk about nuance, then tackle those issues. Tell me why Facebook is choosing not to notify users about this breach. It's really, really convenient that a supposed lack of nuance is an excuse to avoid talking about privacy breaches and bad policies across the board.

I mean, to be blunt, it sometimes just feels kind of insulting. I'm supposed to believe that Facebook would love to talk about what went wrong here and why an authentication issue and a clear vulnerability in their API is being labeled a "scrape" instead of a hack, but instead they have to be silent about the entire thing because nobody on Hackernews would understand their "nuance." Come on.

And I really do believe that the majority of complaints that most people have about Facebook are not related to API access. It's ridiculous crud like misusing phone numbers that were intended for authentication. It's this leak that we're talking about right now. It's the propaganda and fearmongering from official accounts and press releases that's being spread about iOS's new privacy changes. It's tying unrelated products like Oculus into Facebook accounts after customers were assured that wouldn't happen. It's not just accepting information about me from other people, but aggressively targeting those people to get that information. It's everything that happened with Instagram and Snapchat.

I am perfectly willing to talk about nuance in the very narrow case of Cambridge Analytica, and in the very narrow instances where "who owns information about me" is actually relevant to what Facebook is doing. But I don't think Facebook is desperate for nuance and just can't get anyone to have a productive conversation, I think Facebook is using nuance as an excuse to avoid talking about the litany of issues it has in other areas.

> There’s a lot of people at Facebook

This is a weird point, because it plays both sides of the issue. I'm supposed to excuse a lot of bad messaging coming out of Facebook because the people saying it might not represent the full company, but I'm also supposed to ultimately say that Facebook as an entity is trying to do the right thing. I'm supposed to trust "Facebook" with my data, not an individual team.

So it doesn't make me feel better to label Facebook's messaging teams as un-unified, it just makes it harder for me to trust anything that comes out of their mouths because nobody seems to be willing to say "this is what Facebook believes as an organization." This is one of the most powerful social media companies in the world. At what point does it become reasonable to ask the company to get its crap together and present some unified messaging?

Or, alternatively, if Facebook is so gigantic that it contains multitudes of different teams that have different motivations and messaging and it's impossible to think of it as a single entity, at what point does it become reasonable to ask Facebook to stop representing itself as a unified entity and to stop pulling products like Instagram, Oculus, and WhatsApp closer and closer to its main services where upper management and other teams can interfere with their operations?

The argument you're making about a disjointed company is a really strong case for having some good data silos, and based on your other comments it seems to me that you're saying data silos have gotten considerably worse since 2016 because the company wants teams to more easily integrate and collaborate with each other.

I guess what I'm saying here is, when I sign up for a Facebook account who am I trusting? Am I trusting a specific team? Can I get to choose which teams I trust with my data? When a press release comes out from Facebook, will all the authors and chain of management behind it be identified? Because if Facebook wants me to treat it like a unified entity, then I'm going to do what it wants and treat it that way. It can't have it both ways.


If someone from Facebook PR reads your response, they’d very much prefer me to not engage with you. I’m assuming that’s obvious but happy to explain why. Because they have more authority over communications, the more you write like that, the more they’ll have elements to make the case for "Less is more" and even stricter rules about speaking about the company publicly. You’ll get a message that ignores your specific points. That’s who you trust to draft public statements: they’ll say as little as possible — because that’s the wolf that you chose to feed.

People who you trust with your data are privacy specialists, literally every single one of the best that money can buy. The company is very large and full of people who care about it deeply, and the internal culture means that you can’t get away with bad design. Every hacker in the world, starting with the people Snowden worked with, is trying; preventing them from winning motivates a lot of smart people.

You might not like the disconnect between what you hear in the press and the idea that some of the most sophisticated activists are working on making Facebook the safest service out there. That’s because, as I’ve pointed out many times, most of the coverage, including most of the things you raise, is misleading or plain false.

A common pattern is around settings: a piece of information can be visible or used in different contexts. Someone makes that information open and screen-captures the information being shared. They then change their settings, screen-capture that, and then claim that the settings were abused. It sounds too big to be true, so no one believes they can get away with it, but they do, often. That article is guaranteed to trend; any denial will be buried, or more likely used against the company. Previous scandals will be rehashed, especially similarly made-up ones.

Some people like me will be outraged. They’ll try to explain what happened, that the truth is the exact opposite of what was reported: "Facebook didn’t commit that, they _prevented_ it. They were the only ones who did anything!" We’ll write “I _worked_ on that project personally”, “I’m a primary source for this”, and that comment will get downvoted and ignored. My comment history is full of those. I don’t think you want me to explain individually why every scandal you raised isn’t something I’d be worried about, but… I’m not. Some decisions were poorly explained (see my first paragraph for why), but the actual, internal motivation was sound. If there’s any one that worries you particularly, happy to answer specifically.

I understand that I’m asking you to trust a pseudonym on the internet, against literally the entirety of the press and most of the content of social media. Trust me: I know how confusing things are when so many people are trying to gaslight you _about your own job_. It gets weirder because there are some real, bigger issues (that oddly no one cares about). I started early (with my PhD in 2005-2008), so I got into it gradually; I think that has made me able to handle it better than most. But the weight of that constant unfounded hostility is heavy.

I’m simply trying to answer the original question: why do people stay in spite of all the scandals? Because they are genuinely making things better from the inside. Because it’s easy (necessary) to ignore all the noise when, for the scandals you have more context about, it’s mostly lies.

Many people leave, me included, but that’s generally a far more complicated problem, related to the fatigue that this dichotomy carries, and internal priorities.


> People who you trust with your data are privacy specialists, literally every single of of the best that money can buy.

The best privacy specialists that money can buy are working in an organization that has decided it doesn't have to proactively notify its users about data breaches. That is the wrong decision, regardless of the internal reasoning that brought the organization to that conclusion. There might be a really engaging, interesting, reasonable set of events that led Facebook to make that decision, but it's still the wrong one.

So the picture you end up painting in these comments is of a highly motivated, talented group of individuals with good intentions fighting tooth and nail against an organization that doesn't seem to be listening to them. And honestly, sure, whatever. I can buy that there are experts in Facebook working to make the system better. But clearly they aren't winning.

So we go back to my original point, which is that when I make a Facebook account and when I hear from Facebook's PR team about how asking users for consent to track them is "killing small businesses", I don't get to choose to only trust or listen to the privacy-conscious amazing people who are fighting to fix problems. I also have to trust my data to and get my information from the giant organization of marketers, PR people, advertisers, and managers who make up the rest of Facebook and who seem to have a lot more power than people like you do.

> but the actual, internal motivation was sound.

To expand on the above point about internal motivation, and to address your claim that all of this controversy is just bad faith reporting:

I do not believe you if you say there's a good reason to link Oculus accounts to Facebook accounts and to tear down the data silos between those products. I do not believe you if you tell me that it's OK for Facebook's marketing team to spread FUD about Apple's privacy settings. I do not believe you if you try to play off those issues as misreporting or the media lying to me. They are real things that have happened.

Or if you want to dig even deeper into things that can't be described as one-off mistakes or miscommunication, you are never going to convince me while actively using a pseudonym online that Facebook's current real-name policy is a good idea. It's a policy that is still problematic to this day, even after Facebook promised to try and make it better. I understand why Facebook has that policy, I understand that there must have been a lot of nuanced conversation internally that led to it, I understand that Facebook has a point of view that makes the organization think the policy is logical. But it's still not a good policy, and we can look at the effects and see that it didn't turn out well.

I can believe you that there were long, complicated, reasonable internal processes that lead to these problems. I can believe you that people didn't wake up in the morning and decide to be evil. But the end results are still bad for privacy -- not because the media tells me they are, but because I'm a functioning human being who is capable of looking at the outcomes of those decisions and reasoning about the world.

Heck, we can even go a step further. You've tried to play off Facebook's API decisions as being about privacy instead of monopolistic behavior. We started this conversation with me pushing back at the idea that people needed to choose between privacy and interoperability. But your opinion on that is not reflected by Facebook's internal chain of managers, and I'm not basing my opinion on secondary sources. I read the internal emails that were exchanged about its competitors' API access -- not snippets from news sources, I read through the actual leak. You can't play that stuff off as if there's a hidden world that would change everything if only I could see it. Because I personally got to look into that hidden world and see the reasoning that managers were using in their own words, and it was all pretty bad. It's not media bias that's causing that perception.


As I mentioned many times: I know from direct knowledge that the assumptions that you are making are false or misleading. I’m happy to explain why, but I see no value in being told that I’m wrong by someone who has less context than me. I respect your desire to not listen to me. I don’t respect your insistence on telling me that I’m wrong without knowing what I have to say.

I’m just going to correct things that you seem to attribute to me that I have not said:

> against an organization that doesn't seem to be listening to them

Nope, the organisation listens; they listened to me closely. They just learned that very few people outside the organisation do, and they try to protect themselves from the frustration of engaging with dishonest people. You certainly do not care to listen, for instance.

> you are never going to convince me while actively using a pseudonym online

I’m using my first name. I’ve used it as my handle in any service that allowed short handles. I’ve never met someone else with it, so I always assumed that it identifies me directly. Facebook the company is not against using pseudonyms: Instagram uses them; WhatsApp has no names.


I shouldn't reply to this after so much time has passed; it's petty and no one cares. But just for the record:

> the assumptions that you are making are false or misleading

Nothing that I listed in the second section of my comment was false; it was all factual information. Facebook did take out full-page ads with misleading and inaccurate FUD about Apple's privacy stances. Facebook did merge Oculus and Facebook accounts. Facebook does, still, to this day, ban pseudonyms. Facebook has never claimed that the leaked emails describing its internal policies towards competitors were faked.

Those are factual statements.

If you want to claim that I'm lying about them, then that's a really bold claim that you should own in specific terms, not just allude to in vague statements about whether or not I'm willing to listen to you.

> You certainly do not care to listen, for instance.

You have never provided a single example throughout this entire thread of any of my examples being factually inaccurate, but you're characterizing me as being close-minded for not taking you at your word with zero evidence that every critique about Facebook is wrong.

You're upset that I don't know what you have to say, but you haven't said anything. You set up an example of a hypothetical critic who doesn't understand how shared contacts work, and told me to extrapolate from there. So yes, I do think you're wrong, because absent any evidence to the contrary, the fact that you've worked at Facebook doesn't mean you magically have context that makes Facebook's real-name policy not a problem.

Again, unless you want to claim that the entire media landscape is outright committing libel and that Facebook doesn't have a real name policy[0].

[0]: https://www.facebook.com/help/112146705538576


To me Facebook’s main issue is having a revenue model dependent on advertising boosted by engagement.

This is the main reason we end up with filter bubbles and content that needs to be consumed fast and pushes people to react, relying more and more on anger over most other emotions, with no need for veracity or depth, and bringing nothing lasting to the user; the other side of the coin is the privacy debacles that come from monetizing that engagement.

I don’t see Facebook making any effort to mitigate these effects on their users. On the contrary, we have had many reports of research to increase engagement, direct users’ emotions, and test different algorithmic strategies.


> Facebook’s main issue is having a revenue model dependent on advertising

Good news, that changed last year.


> Social media holds a mirror up to society, and we don't like what we see there.

Well yes, we humans have flaws. Now, there are some institutions, habits, and organisations that ameliorate these flaws and enable us to become better versions of ourselves ("better angels of our nature").

And then there are things that prey on our baser motives, pander to them, exacerbate them, and profit from it. And Facebook is surely among them.

Is Facebook only possible because of our flawed nature? Sure. Does that exculpate it? No.


> Absolutely (the cynics would never admit this point, but it's true, e.g. the amount of resources and cross-company efforts thrown at improving privacy is incredible).

I would have to disagree there. Their core ethos has been pretty bad on this front. I quit Facebook around the time .edu addresses were no longer required. Things like the wall were the major features.

The main reason I quit, though, was that they were routinely reverting and NOT respecting my privacy options. It got to the point where I was taking screenshots of my privacy options to make sure it wasn’t just something I was missing.

I reactivated around 2009 but quickly deactivated again for the same reasons. And rather than regularly combing through my options, I chose to opt out entirely.

Multiple people I have spoken with since, many of whom aren’t tech savvy, have said the same over the years, even recently.


Not sure I understand the argument here. I'm clearly talking about current/ongoing efforts, not decisions made 15 years ago. Anecdotal evidence doesn't change the fact that I witnessed huge cross-cutting efforts to improve privacy. At Facebook's scale, it's not exactly an overnight process.


Well, in my defense, your evidence is as anecdotal as mine here. The habit of making all options opt-out and actively resetting past opt-outs over the course of, to your point, decades isn’t exactly what I would call a focus on privacy.

If they were still up to the same practices 15 years ago, 10 years ago, and as recently as the last few years... and, to your point, at scale it doesn’t change overnight... how exactly are they privacy conscious? The company since its inception has been basically antithetical to the concept of privacy and ownership of the data you place on their service.

Edit.

To be clear: I don’t think there’s anything wrong with their model. I’m not a fan of their opt-out-style approach, but that’s clearly not a major issue for most. I just wouldn’t categorize Facebook as privacy focused. At best they seem to respond in a minimalist way to these concerns. Which, again, is fine. But I wouldn’t say it’s a focus for them.


These days, one of the main causes - if not the main cause - of Facebook's (and the other GAFAM companies') evilness is simply that they have gotten too big.


> Personally, I just do not buy into this "Facebook is evil" narrative, and even less so having worked there.

How would you characterize a group of people who built a product used to incite a genocide? [1] (That’s not hyperbole; that’s literally what happened!)

Even if you attribute this to incompetence (is incompetence that leads to genocide excusable?), what is a reasonable reaction once you’ve discovered your product has been used in this way? Has this been Facebook’s reaction? In my eyes, clearly not, as it continues to be used in similar ways to this day.

[1] https://www.nytimes.com/2018/11/06/technology/myanmar-facebo...


The New York Times printed articles and columns that were used as fact and evidence for the invasion of Iraq in the early 2000s. Should we cancel the New York Times? We live in a world of imperfect systems built by imperfect people.

Everyone sitting in their glass houses just throwing stones without viewing things from the other side. Do I condemn genocide, of course. Most people condemn genocide. Yet I'm not going to blame Facebook for building a platform for _people_ and then turn around and blame Facebook when _people_ use the platform for nefarious purposes.

Something about cake and eating it too. Even removing Facebook from the equation, the internet itself is ultimately responsible for genocide because it allows people to connect more easily than ever before. Are you going to blame the very concept of the internet? Because if you're so up in arms about Facebook, you should be rallying for the destruction of the internet as well. Good riddance, have some perspective.


I think that's an odd perspective to take.

It's like saying that Toyota enabled terrorist groups to move their soldiers and weapons from point A to point B to commit terrorist acts, because they manufactured cars.

The difference is that Facebook did not create a product TO incite genocide (I think we can all agree on that). However, they did create a platform which brings people together, AND bad actors used it as a platform to divide people.

Did telecommunications companies and mobile phone manufacturers such as Apple build mobile phones to incite genocide, murder, child rape, drug dealing, etc.? No, because mobile communication is known to be used for both good and bad.


Quick reminder that Facebook knowingly facilitated a genocide of the Rohingya people.


Did radio knowingly facilitate the Rwandan genocide?


"Radio" is not a company that profited off the genocide and looked the other way when it happened.


No, but it's the closest modern equivalent. A communication medium is used to facilitate bad activity, so clearly the correct thing to do is to blame the communication medium.

I'm sure Myanmar was full of shiny happy people before Facebook got there, after all. /s


> profited off the genocide

Also, it's ludicrous to believe that FB actually makes any money off Myanmar. They make money from serving ads to the developed world, not to poor developing countries.


No, that’s a lie.

The local junta used Facebook like they would have used any convenient mode of communication. Facebook wasn’t able to detect it because of a complex issue involving automated translation of non-ASCII languages. There were concerning reports, and the relevant team reached out to their contacts in the country, the wife of a senior executive and _a Nobel prize recipient_, who both personally denied that anything wrong was happening.

The company scrambled to get more context but didn’t have translators willing to tell them the truth, because… well, there’s an uncomfortable correlation between people speaking Burmese and people who think the Rohingya deserve respect.

As soon as the reality was clear, the company acknowledged that they had made a mistake in trusting local authorities and in not investing more in non-standard software tools, and piled on investment in detecting such projects.

You can compare that to how much Toyota cares about their logo being on the back of the technicals in every photo featuring genocidal warlords and terrorist groups.


You're making excuses. Facebook didn't hire translators even though they were profiting off the genocide. They didn't cut off services. It wasn't just the military: nationalist Buddhists were outing their neighbors, doxxing them as Rohingya, condemning them to death via Facebook.

And for you to suggest that Facebook shouldn't be held responsible for their actions because other tech companies would do the same is lazy and dishonest. The fact is that the genocide was led by Facebook, not some other tech company, and I will keep reminding people of this event every time some apologist tries to ignore what happened.


There are some very smart people who consider privacy either unimportant or dead anyway.

There are some people working at Facebook who consider it evil but consider the salary a sufficient reason to work there.

There are some deluding themselves that they can make FB less evil.

I think that some smart people make the mistake of assuming that other smart people have the same priorities and ethics as them. That is not true.

Some of the smartest people do far more evil things than anything ever done by Facebook.

See the FSB, KGB, NSA, Mossad, finance companies, chemical companies, lawyers, and the military sector for many examples of people doing clearly evil things despite being smart.

To clarify: there are people in the places mentioned who also do non-evil things, or who do evil things while being dumb. Not everyone at a chemical company is covering up the Bhopal disaster or setting up the next one.


> classic clueless business people

Why do you think that FB is managed by clueless business people? They are clearly not clueless, just operating with completely different priorities that mismatch mine and a completely different opinion about ethics (AKA "they are evil").


>On the one hand I picture a classic clueless business people

They aren't clueless at all. Privacy isn't a priority for their business, in fact it would actively harm revenue.


I worked at Facebook from 2009-2012 and again briefly in 2016. "Answers" like this are why you will not read real answers.


Then what is a real answer in this case?


The problem isn’t simple.

To take one example: banning racists requires that you have a clear, well-understood, but also nuanced view of what counts as incitement, dog-whistling, free speech, etc. Having large teams enforce those rules requires that everyone on those teams understands the rules and applies them fairly. In these divided times, I challenge you to find anyone able to articulate the agendas of both the American Left and the American Right coherently. So you can have multiple people review each case, but will any process produce decisions that most people won’t find deeply wrong?

In addition, they have to be international, which opens even bigger cans of worms. Casey Newton has great reporting on the centres that handle those; I strongly recommend you read it — with the proviso that you realise Facebook doesn’t manage those centres, they are contractors, so Facebook also has to make complicated decisions about how to ensure those centres are not abusing their staff, that performance is properly rewarded, etc., without overstepping. You can imagine, say, the gender relations among a group of people who speak certain languages ::cough:: Hindi ::cough::. Now find me Burmese-speaking people who think that the rules about human rights also apply to the Rohingya. Or how do you realise that your team dealing with gender-related issues is managed by TERFs? How do you manage all of those if you’ve never heard of Abkhazia, of how language and caste interlace in India, of the religious dimension of the conflict in Somaliland, of common forms of gaslighting about ethnic relations in China? The goal is not for you to know it all, but how do you set up processes so as not to be blind to the fact that the prejudice is already in your (contractor’s) house?

Also, on occasion, those rules have to be legally enforceable. Say you have a friend who lives in Turkey, or in Germany: is it legal for you to comment on their post saying that Ataturk was a little bitch, or that Hitler was a great man? Well, it might not be legal for them to read it… So do you hide it? Do you add a message to say that you hid something?

What is simple is that engaging with shoot-from-the-hip, Facebook-is-bad comments will never end up in a better place.


It’s got a good bottom-up culture focused on empowering engineers to build stuff like React and ship changes to the social media platform on their own initiative (especially when those changes prove effective in improving key metrics in A/B testing).

This same bottom-up culture leads to a lack of oversight on potentially questionable changes, and also means there is sometimes a lack of ownership and investment in fixing issues that haven’t gotten media attention (because no team decides to work on the fix). In recent times there has been quite a bit of investment in improving privacy, but it feels a lot like trying to patch over a broken foundation.


“Number of employees: 52,535 (June 30, 2020)”

Plenty of space for both groups


So the good and functional engineering teams work on UI toolkits, and all the poor programmers shepherded by non-technical management work on the ever-suspect contact scrapers at another campus, or something?


More likely it is compartmentalized in such a way that every single person has plausible deniability sufficient to believe that what they are doing is good: “I help people find each other!” As Upton Sinclair noted, it is hard to make a person understand something when his salary depends on not understanding it. Doubly so when the salary is FAANG-high.


Do you think this also holds true for Americans who develop weapons to kill people? American drone pilots dropping bombs on cities? Palantir employees, bank employees, lobbyists, people working on DRM and censorship systems, politically partisan reporters, etc., etc.?

I find this hand-wringing around Facebook employees very weird when it's so trivially easy to find people who outright build systems for murdering other people and are even proud of it. If those people exist in the hundreds of thousands, why is the existence of people who build a webpage that sells advertisements and shows messages from people this weird? In what way is working for Facebook worse than working for a news outlet writing fake news that outright attacks LGBT communities? And why can everyone understand the existence of those reporters and then wonder why people still work at FB?


I suspect weapons-manufacturers and soldiers are less self-deluded, to be honest. They believe there are bad people in the world who need to be killed to keep America safe. They also believe in the principle of civilian control of the military, and it's not the military's job to second-guess the elected leadership of the country. Who specifically gets killed is above their pay-grade.

Contrast this with many FAANG employees who may believe they personally are "making a difference" and continually second-guessing and sniping at their (unelected) corporate leadership because they need to feel that they're On The Right Side of History.


You are contradicting yourself a bit in your second paragraph. Employees who want to be on the right side of history have stopped Google from doing projects related to weapons and Chinese censorship, which is literally making a difference.


An interesting observation. The low-effort (and zero-knowledge) answer is that it's easy, low-friction, and rewarding to show off technological excellence in things like React, but executing "ethical excellence," or whatever you'd call it, would be the exact opposite.

In the end, as so often, it's just the old salary-induced difficulty of understanding.


The vast majority of the coverage of Facebook has large factual issues that most employees would know about. Seeing their work, or the work of people they admire, misrepresented just makes employees distrust the media.

It’s easy to have Gell-Mann amnesia when you notice something wrong every year at most. It’s harder when it’s every day.


The flaw is that young idealistic techies and would-be business people keep making the mistake of believing you can grow a large multinational corporation that doesn't end up becoming at least slightly evil. The evil is inherent in the scale and structure of these entities and it comes from the bureaucracy of many competing and self-defeating business incentives and the ultimate decision/realization that "making more money" is the only sane holistic goal to globally optimize for. Simply deciding that your company will contribute to open source or have some motto like "don't be evil" is not enough to avert this pitfall.


Money buys you everything, including highly skilled and capable engineering teams.

Lackeys are lackeys, they will do what you ask as long as the crumbs are big enough. The hand that feeds and all that.

It is obvious how Facebook benefits from obtaining this info, thus the onus is on them to prove otherwise. They haven't, so they didn't.


I’ve been one of the very first critics of the company (suggesting they’d become a monopoly and that current laws wouldn’t be fitting, in a PhD submitted in late 2004). I’ve worked at Facebook (twice), on controversial projects, and I have debated about the company a lot over the last 17 years.

1. Most of the reporting is just factually wrong. For instance, Facebook didn’t sell data to Cambridge Analytica; your bank did sell it, with your name, for profit. Facebook didn’t know about the Rohingya genocide: they had a Nobel prize laureate telling them the messages were harmless. It doesn’t matter that Facebook did all that was legally possible once they realised what was happening; there is such a stain on the brand that opponents are actively protecting the people who are still doing harm to this day, just because it lets them feel good about criticising the company. It’s horrifying. And when employees see that, they are rightfully happy that the company is powerful enough to ignore the critics and do what they can to, say, prevent marketers from using the data that your bank sold them. Or to not throw the only hope that Myanmar has of ever being a democracy to the critics because she lied to them.

2. There’s a lot of bad decisions happening. I’ve personally seen several issues that are far worse than what has been reported. Seriously. Which is stunning — but journalists are so latched onto their stories (and so hurt to be called out on their lies) that they refuse to hear about far more problematic and nuanced issues. I’m not making this up.

3. Most of the real issues are hard, and far outside of Facebook’s control, in spite of them trying really hard and often publicly. Take how bonkers the American Right is right now (sorry to all the readers of HN who think that QAnon is a reasonable group, or that Trump hasn’t raped underaged victims of human trafficking, or never paid for an abortion after raw-dogging pornstars for 40 years). Sure, it looks like banning the racist assholes would be a healthy decision, but it’s not necessarily democratic. And talking about that requires far more nuance than most analysts have granted the company. Internally, the people in charge are far more deliberate than that. “1200 employees have signed a petition!” So… 0.6% of the company? Sure, that committee is still horrifyingly American-centric; they have a diversity that would make a Yale-Harvard rowing derby blush; they are full of political appointees with problematic friends, but at least, unlike journalists, they get basics like: censorship is bad. More importantly, there’s a really good team of researchers that tries everything that is suggested publicly and more. “What if we told Flat-Earthers and Anti-Vaxxers that they are wrong, and kicked them out?” Well, there’s published science from Facebook about that. That science informs a lot of the key decisions. What happens when you link to those results in a debate outside the company? “Oh, it’s from Facebook, it must be biased…” Huh… no, it’s peer-reviewed and it has strikingly critical takes; it has detailed experimental protocols. You are welcome to engage with the authors. But most people outside of the company choose not to.

4. The alternative is far worse. Imagine you can help the company gain market share among teenagers. Every minute they spend on Facebook, they are not listening to Chinese propaganda on TikTok, or getting asked for nudes by creepy 45-year-olds on SnapChat. So, are the teams at Facebook working on detecting State actors or groomers under-funded? Sure, but those teams _exist_; you can help them. Those teams don’t exist at the alternatives. So you are not necessarily defending a company whose efforts in key directions are insufficient because of “the pay check” or “profit”, but because, as insufficient as they may be, those efforts are there.

5. The vast majority of what Facebook is doing is neither controversial nor hard. It’s helping small businesses grow and survive the pandemic; helping families share a video feed for an anniversary; helping VR not trigger headaches after 20 minutes. Those are clear ethical goods (as well as profitable, which isn’t bad) and leaving the company means that those projects get delayed.

6. Employees feel heard on those questions. But the thing is: most of the time, the problem isn’t one where someone whose work and focus are unrelated needs to barge in and play the reply-guy. That’s occasionally helpful, but rarely. A recent example: last year, employees from Hong Kong reached out to explain how the CCP was trying to detect and hurt them. Facebook has systems to protect dissidents. Knowing about them, spreading awareness locally, and translating them to Cantonese was presumably helpful. But the policy team rarely needs petitions to be told that the NYTimes has published a scathing OpEd, or that racism is bad. Employees overall trust the management to do well enough; they share the same view of the world. They read the same opinions that are shared publicly (maybe more often) and agree with them. That’s probably the most striking difference: the same text (say, about how Apple’s changes to iOS will decimate small businesses) is taken at face value internally, because the assumptions behind it are well documented and understood. So it’s incredibly confusing when the exact same document is immediately shot down externally, for being wrong (it’s not) or self-interested (sure, but still true and important).

So employees disengage. Because it’s too frustrating, too hard, and it generally leads to bad things. Not because of the confusion of being a bad guy (it’s far more confusing internally, because all the remaining problems are hard) or because of a paycheck (you get offers for more money every day in your LinkedIn inbox, let alone from writing a tell-all book).


Facebooks "contact import" feature had been running on (in the EU):

- it's the user who commits the crime

- it's a crime not pursued by the state unless someone sues

IMHO it's kinda ridiculous that this "gap in laws/regulations" wasn't closed years ago.


Another day. Another Facebook fuckup.

When will this end?


When consequences get real. Real criminal and civil penalties like jail time, billions of dollars in fines, etc.



I love the NOYB project. Make sure to support them with a recurring annual donation if you aren't already.


No one really cares about this; in 3 months or less it will be completely forgotten, and it's business as usual.



