Ask HN: Current and former FB employees, what's the other side to the story?
108 points by adpirz on March 6, 2019 | 129 comments
Another day, another story of Facebook's reach overextending, breaching privacy unnecessarily, and generally acting in some form of bad faith.

But business is clearly still good and it seems like they're still able to recruit plenty of talent. So what's the other side here? Internally, are there feelings of consternation about all the negative press or is there a bigger piece that is missed by it?

I'm really curious what the other side is here, especially given FB hasn't slowed down at all despite all the press.




Throwaway for obvious reasons. Opinions are my own.

I joined in the last year. I had major issues with FB before, reflecting the typical HN user stance. I joined because I was curious and their promises made it sound like an amazing place to work.

And it is. They treat us better than any other company in terms of autonomy and input. Everyone has a seat at the table.

And from what I’ve seen, the external perceptions don’t match the internal objectives. I hear real change from leadership, and most of the major issues are years old.

FB has invested hugely in protecting elections, possibly more than any single government. They have doubled that on integrity, etc.

And realistically, I don’t think we know how to balance harm and good in the real world. I just don’t see tracking online activity as more harmful than the benefit that people get from the service. (Not so much US users, but the benefits to people in very poor countries are very real, and wouldn’t exist if the service didn’t monetize so well)

In short I don’t see them as an evil entity. They’re just a large one that is easy pickings for negative press. I used to work for the US government in a health research role; this feels similar.

When something is so massive and decentralized, the “bad press” events are gonna happen. I think FB, and especially Zuckerberg, have done a great job responding to these issues to try and solve them.

Remember, what Facebook is doing has never been done before. There are going to be mistakes.


> I just don’t see tracking online activity as more harmful than the benefit that people get from the service. (Not so much US users, but the benefits to people in very poor countries are very real, and wouldn’t exist if the service didn’t monetize so well)

Can you elaborate on this? What's the difference between people in the US and poor countries? Why does tracking help the latter group?


I think he's just saying that folks in developing countries get more value out of the service (developed countries have much more competition), not that they benefit more from tracking.


This. People in poor countries use Facebook for really valuable things, like talking to loved ones. In the States we have alternatives and can afford a few bucks a month for a service. But in poor countries, Facebook is often a lifeline (sometimes literally, in the case of crises)


You hear a lot of "STOP SHOWING ME ADS ON _____. I'LL PAY FOR THIS. STOP TRACKING ME. I'D LOVE A PAID VERSION OF ______"

And every single time I laugh at how much those statements reek of privilege. It's always from people who live in the biggest, richest cities of the country and are usually white+young+have steady incomes.

If Facebook started offering ad-free versions, within a year it would become a website only for the SF/NY/London elite kids.

Imagine if these services were paid: Facebook = $40/mo, WhatsApp = $30/mo, Instagram = $30/mo, YouTube = $30/mo, Google Search = $50/mo, Google Maps = $30/mo.

Sure, you're probably rich enough that you can afford a crazy amount to subscribe to every possible website. But the majority of the world cannot - and that's why ads are not just an unnecessary distraction but instead a necessary way of life on the Internet.


> Sure, you're probably rich enough that you can afford a crazy amount to subscribe to every possible website. But the majority of the world cannot - and that's why ads are not just an unnecessary distraction but instead a necessary way of life on the Internet.

You seem to be assuming that "life on the Internet" requires these services, and requires them to bring in the insane revenues that they bring in through targeted advertising. It doesn't. The Internet and the Web had thriving communities before any of them existed, and there are meaningful alternatives to everything you mentioned that don't surveil their users. The only thing that makes any of these services feel required is network effects and the attendant monopoly power. People use them instead of the alternatives because they're "free", and that's where their friends are and where the content is. They actively enforce this power, by buying competitors, investing in slicker UIs, preventing other services from providing a different interface to the same data, and not providing any meaningful way for users to export their data and delete it from the company's servers. Wanting an alternative to this situation needn't be an expression of privilege.


I'll add that as soon as you charge more than zero you'll lose revenue to some VC-funded copycat with no hope or intent of ever becoming profitable before the early stakeholders sell the worthless husk to some sucker. Those are the "ethical standards" so many here are fighting for.


Let me disagree on this.

I think you joined Facebook, and they showed you the "Employee side" where everything is beautiful, with free food and free toys, and you became convinced. Yes, they treat employees super well; nobody disputes that. But how does it change anything about how bad the product is (and all the bad things it does to people who use it a lot)?

It is easy to let the beautiful inner side of Facebook eclipse the product side of Facebook that everyone else sees from the outside.


I get that you have a fundamental disagreement with the opinions expressed by this commenter, but please don’t invalidate their experience like that. Saying, “Sorry but no” is a flippant way of dismissing their experience without actually being sorry for doing it.

It’s frankly not for you to say “what happened” when it comes to their opinion, which is based on what they’ve witnessed and experienced. Note that this entire thread does not concern facts, it concerns impressions and opinions. This isn’t the place to “Actually, ...” what someone else has said.

It strikes me as particularly condescending that you’re telling this person they only feel this way because of “free food and free toys.”


You make a good point and I edited my comment. Thanks.


> But how does it change anything about how bad the product is

From OP:

> I hear real change from leadership

> FB has invested hugely in protecting elections, possibly more than any single government. They have doubled that on integrity, etc.

> I think FB, and especially Zuckerberg, have done a great job responding to these issues to try and solve them.

You might think they're misinformed or incorrect, but you can't say they didn't address product/company issues.


I don’t think the product is bad, nor do I think it’s evil. The people who decide to use the service every day get value.

I think the negative fb news of late has really enabled people to strengthen their confirmation bias against fb.

For every super negative story that HN latches onto, there are a dozen internal projects that help the world in some way.

I guess I just see the intentions and a broader perspective in some ways. When I saw how the sausage was made, I was actually impressed with the earnestness of people's intentions.

Put simply, I don’t think anyone on HN has enough facts and measurements of outcomes to judge FB.


"how bad the product is"

What makes Facebook so much worse than email? The fact that its default sorting algorithm is based on how relevant it thinks stories are to your interests? Any spread of false information could happen basically as easily through Gmail. I don't see people getting mad at Gmail and Outlook for allowing chain letters to keep existing.


The utility function of Facebook (what it maximizes for) is grabbing your attention. This in itself makes the product bad. They will hack the endorphins in your brain in order to keep you hooked.

The utility function of email is utilitarian. No email provider wants to keep its users hooked on their email.


How are you measuring this utility function? What is the evidence that Facebook's objective is to hack endorphins?

All the conversations I’ve heard discuss how to make products better for users. Even if it hurts standard metrics.

For example, the goal of the feed team hasn’t been to increase engagement for a couple of years now. The metric of choice is meaningful interactions.

The outsider's perspective of FB seems to be a couple of years out of date compared to what I’ve seen.


I mean, those notifications everyone gets that have no actual pertinent info and are super blatant engagement hacks have been widely discussed and experienced, and there's no strong argument that those make the product better for users (and it's obvious which metrics they're attempting to inflate).


This sums up the advertising industry as a whole.


I wish the person who replied didn't delete their comment. They asked you if you felt the same way about Netflix. Are products that want to keep people hooked and have all their attention universally bad?


Obviously so. If you're into music/movies/arts you should come up with a crappier product. I hope you don't come up with tunes that are addictive and get people hooked on your music.

Same with sports teams: I hope they all play badly so that people don't get hooked on them.

/s


I replied on another thread on this topic where the same question got raised. (TL;DR: not perfect, but definitely an order of magnitude better.)


This argument sounds a lot like gaslighting. You could make this argument for any unethical company and it would work.


Can we please stop using "gaslighting" as shorthand for "an opinion or perspective expressed by a single user that I disagree with?" I've seen it in several comment threads here in the past few weeks and it's getting really annoying.


I think you could also identify similar negative consequences from any entity the size of Facebook. I think it’s a side effect of being big, and FB is no different from any of the Fortune 1000 or just about any government on the planet.

What I do see internally: executives down through the rank and file acknowledge the problems and work really hard to address them. I haven’t seen the same level of introspection elsewhere, even when I worked for the DoD.


> This argument sounds a lot like gaslighting.

That's just a matter of whether you think OP truly believes what they said.


"It's not a lie, if you believe it!" - George Costanza


Indeed. You can be wrong without lying.


How is this gaslighting?


I worked at FB for three years as a software engineer. The people I worked with basically believed:

1) The core product is good and useful to people/society

2) Most of the negative articles about Facebook are based on some piece of truth but go out of their way to make things seem worse and more sinister than they really are

3) Zuck generally wants to do the right thing, but of course makes mistakes

4) Yes the company does bad things sometimes, but not significantly worse than Google or Amazon (which are the main places many FB employees would consider working if they left)


And what did you believe, or what do you believe now?


Basically the same. My reasons for leaving were not related to company/product ethics.


[flagged]


What's the relation?


"3) Zuck generally wants to do the right thing, but of course makes mistakes"

Yeah, he doesn't make mistakes like openly calling his users dumb fxxxx anymore.

https://www.businessinsider.com/well-these-new-zuckerberg-im...

But I have zero indication that he has changed his beliefs in any way. The sad truth is that he might be right. Mainstream humanity seems pretty stupid to trust all their sensitive data to strangers who apparently have no respect for them. That doesn't make Zuckerberg a nice person, though.


I disagree with all of those, but more specifically with

>> 1) The core product is good and useful to people/society.

Definitely not. If you had to put a numerical value on Facebook's impact on society, it would be extremely negative, I think. You grab the attention of weak users, who end up spending so much time on your platform doing nothing instead of contributing to things that would benefit society. You create an illusion that everyone has a beautiful life, pushing some people to the verge of depression or suicide. And maybe worst of all, you block free speech based on whatever private criteria you decide, and you allow mass brainwashing by pushing ideas on your newsfeed. The only positive side is the utility of connecting people, which already existed perfectly well before.

The only reason why you get away with all of this is because it is so difficult to put an actual number on those.


As I said in a previous comment, I'd be curious whether this "wasting time instead of contributing to society" also applies to other entertainment companies that quite literally waste thousands of hours a year of the average weak person's time, such as Netflix, which instead is typically seen as providing very positive value to society.


Yeah, it kind of implies that you should spend 100% of your waking time being "productive", which is not realistic. Everybody needs time to recharge, and depending on your personality, that can be anything from going to a bar with friends, to playing recreational sports, to watching TV, to scrolling through the internet.


Yeah. I ask this because I want to be able to understand the personality of whoever is writing these comments before deciding if I should consider or simply dismiss them.

If the person just says "yeah, clearly Netflix is just as negative for society", then that will be enough for me to completely dismiss this person's opinions on the entire matter, since Netflix, just like Facebook, has provided me with massive positive value, even if, at times, it made me jealous of the lives of other people as presented in some TV shows, again just like Facebook.


While I also wouldn't put Netflix on a pedestal, I think it is in a very different category than Facebook:

With Netflix, you do indeed lose your time, but you don't have that feeling that everyone else is having a wonderful time while you are losing yours on the couch. Netflix (as TV) provides some healthy entertainment.

Facebook, on the other hand, provides entertainment by broadcasting how perfect other people's lives are. And this is exactly what pushes people to depression and suicide. Ask people you know and most of them will admit that after their daily visit to Facebook they feel a bit burned out. This effect is also magnified because you now always keep in mind that you should post a "perfect picture" on Instagram to please your followers.


This seems like a stretch to me and certainly diminishes the value of your original comment: "... who end up spending so much time on your platform doing nothing instead of contributing to things that would benefit society".

Personally, I've felt just as burned out watching very popular TV shows on Netflix that portray the amazing lives of some people as I have with Facebook, and I would never say Netflix has brought negative value to my life, just as I wouldn't say that of Facebook, which in my personal experience has been one of the very best products of this decade.

Certainly agree to disagree.


I can see your point and definitely agree to disagree.

I think the perverse effects of Facebook are not fully understood yet. But I'm fairly confident we will look back at this as something terrible. Facebook and Instagram also indirectly created the selfie generation, more self-centered than ever, with the goal of getting more and more likes. I don't see that happening with TV.

And then there is the whole free speech / mass propaganda / privacy debate that I'm not even getting into.


> Netflix (as TV) provides some healthy entertainment.

It's far from obvious that TV is healthy. This article has some discussion of one potential downside and a rebuttal: https://www.scientificamerican.com/article/does-tv-rot-your-...


By simple logic: if TV is not healthy, then Facebook is definitely worse, as it combines TV-style content with broadcasts of the fake perfect lives of people you know.


> The only reason why you get away with all of this is because it is so difficult to put an actual number on those.

It's equally difficult to put numbers on the positive effects, which can be anything from enjoying a joke to organizing a protest of an oppressive government to preventing a suicide.


Or simply live casting the torture of a mentally challenged individual.


Would you apply that reasoning to watching porn, reading Harry Potter novels, bbq-ing etc?

Arguably, none of those are "contributing in things that would benefit society".


Every generation has its own toxic time-wasters. Things that people aren't willing to pay for and wish they used less of. But the alternative is doing nothing, which can actually be painful to some.

It used to be TV, chat, radio. Netflix actually brings a lot to the table, in that people appreciate it enough to pay for it.


Disclaimer: I work for a competitor of FB. Opinions are my own, etc.

There is an aspect of https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect when hearing stories about FB. I've heard similar stories about various tech companies from major press outlets and the facts & opinions in the stories directly contradict my own first hand knowledge.

Given this data, why should I trust press articles about FB when they're written by the same journalists who are focused on maximizing revenue for their own companies? Most articles I see are ~10% cited sources and ~90% speculation.

Two examples of press stories that most technical people know are false are https://en.wikipedia.org/wiki/Satoshi_Nakamoto#Dorian_Nakamo... and https://en.wikipedia.org/wiki/Supermicro#Allegations_of_comp... . People with technical knowledge can evaluate those stories and realize that they're incorrect, but others without technical knowledge probably believe the newspapers. Similarly, people at FB with insider knowledge probably realize that most of the press articles about FB are 90% incorrect and that FB is in the same moral league as all the other major tech companies.


> they're written by the same journalists who are focused on maximizing revenue for their own companies?

That's a pretty huge accusation to casually throw out. Where is the actual evidence that a journalist writing an article critical of Facebook is doing so because they are focused on maximising revenue for their own company? Most journalists have nothing to do with the business side of their company, and this feels like a way to shut down absolutely any critical article without further analysis.

Further to that, the two articles you cite have nothing to do with Facebook. I'd be interested to know what technical knowledge disproves the stories about Cambridge Analytica or genocide in Myanmar:

https://www.vox.com/policy-and-politics/2018/3/23/17151916/f...

https://www.nytimes.com/2018/10/15/technology/myanmar-facebo...

I find the genocide case particularly interesting to talk about because it doesn't really require much technical knowledge to understand at all - the military set up fake FB profiles in order to push propaganda and promote ethnic cleansing. It's a question of Facebook's role in society and how proactive they should or should not be about moderating the content transmitted through their platform. I don't think there's a simple right or wrong answer, and it feels like a shame to dismiss the idea entirely because the article was written by a newspaper that technically competes with Facebook for ad impressions, or to dismiss it because of an assumption that the story is false because other false stories exist in the world.


My grandmother had her business destroyed by false newspaper stories paid for by a competitor. Till the day she died she wouldn’t touch a newspaper.

I’ve never been the target of a journalist before, but many who have seem to find their perspective profoundly changed by the experience.


> Most journalists have nothing to do with the business side of their company

Is that really true? It's the business side that pays their salaries, after all. Most people, in any business, are aware of where their employer's revenue comes from, who their competitors are, what threats they face, etc. This awareness is often heightened after being caught in a layoff, or working with others who have been. The news business has been pretty rough for a while so I'm pretty sure every working journalist has a pretty clear idea about these things.

There doesn't have to be a conspiracy here, and inferring one is a bit of a strawman. All that's necessary is plain old human cognitive bias. Availability bias: journalists are hyper-aware of Facebook because of how it relates to their lives. Confirmation bias: most journalists are predisposed to think ill of Facebook. These effects become even stronger when the journalists are all validating and reinforcing each other when they give negative stories about Facebook more prominence than they rationally deserve. Such stories are routinely repeated and exaggerated far more than other objectively similar stories that don't play into the same biases - not because of specific intent but just because journalists are humans.

Facebook is not the first company that has gone from media darling to media scapegoat. It won't be the last. Look at the extremely positive and extremely negative stories about any of Elon Musk's ventures. Or any celebrity. Anybody who denies that journalists play favorites isn't being very realistic.


The majority of people I've talked to about Cambridge Analytica think that Facebook just sent out a big pile of user data without asking anyone's permission. Which is a natural and reasonable interpretation of summaries like the one Vox provides in that article:

> Facebook exposed data on up to 87 million Facebook users to a researcher who worked at Cambridge Analytica, which worked for the Trump campaign.

What actually happened is that Facebook allowed a researcher to ask individual users to volunteer their data, and that researcher gave the data to Cambridge Analytica even though he promised he'd only use it for his research. There are ways Facebook could have prevented this, and I agree they should have - the UX implied a level of trust that Facebook wasn't properly vetting. But the popular narrative is almost entirely false.


But... Facebook did expose that data, by allowing a researcher to gather data on 87 million users with no actual restrictions on what he could do with that data. The root cause here is Facebook. It feels like arguing semantics to say the narrative is "almost entirely false" because Facebook exposed that data to an individual who later exposed it to Cambridge Analytica. Arguably Facebook exposed data on far more than 87 million Facebook users - it exposed data on every single user it has.


To be clear, the scenario was this. Facebook allowed people to voluntarily share their data with a large number of third-party apps, including one called "This Is Your Digital Life". This was a popular feature; lots of people wanted to share their data in order to get quiz results and such. The guy who created "This Is Your Digital Life" said the data was only for his academic research, but it was secretly for Cambridge Analytica.

When the news reports that some company has exposed data, that normally means they set an S3 bucket to public or got hacked. I don't think many people would naturally describe the above scenario, where I voluntarily handed my Facebook data to a third party who lied about who they were, as "Facebook exposed your user data".


Some clarifications:

- Facebook allowed people to share their data and the data of their friends with third parties. So there was no requirement for me to use My Digital Life, or to accept a single permission prompt, in order for the researcher to have access to my data.

- people didn't "want to share their data", they wanted their quiz results. Was every bit of the data they provided required in order to get the quiz results? Doubtful. Were users aware that this data could be permanently stored outside of Facebook and used at a later point? Again, doubtful.

> that normally means they set an S3 bucket to public or got hacked

I'd argue that the combination of the above means that what Facebook did was absolutely equivalent to leaving an S3 bucket public. Without any involvement from me at all, the researcher was able to access my personal data, and was given permission by someone whose understanding of what they were doing was, at best, dubious. Facebook knew all of this because they created the system it lived in, but didn't appear to consider the consequences, so yeah, I consider them responsible.
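
To make that mechanism concrete, here is a minimal illustrative sketch of the consent flow being described (plain Python; the user names, scope strings, and functions are invented stand-ins, not Facebook's actual Graph API): one user clicks "allow" and the app walks away with their friends' data as well.

    # Toy model of the pre-2014 consent flow described above.
    # All names and "scopes" here are invented for illustration only.

    users = {
        "alice": {"likes": ["hiking"], "friends": ["bob", "carol"]},
        "bob":   {"likes": ["chess"],  "friends": ["alice"]},
        "carol": {"likes": ["jazz"],   "friends": ["alice"]},
    }

    def install_app(app, user, granted_scopes):
        """The installing user sees a prompt and grants the app some scopes."""
        app["grants"][user] = set(granted_scopes)

    def fetch_data(app, user):
        """What the platform hands the app once `user` alone has consented."""
        scopes = app["grants"].get(user, set())
        result = {}
        if "user_likes" in scopes:
            result[user] = users[user]["likes"]
        if "friends_likes" in scopes:
            # The friends never saw a prompt; their data rides along
            # on the installing user's consent.
            for friend in users[user]["friends"]:
                result[friend] = users[friend]["likes"]
        return result

    quiz_app = {"name": "personality quiz", "grants": {}}
    install_app(quiz_app, "alice", ["user_likes", "friends_likes"])
    print(fetch_data(quiz_app, "alice"))
    # {'alice': ['hiking'], 'bob': ['chess'], 'carol': ['jazz']}
    # Only alice clicked "allow", yet bob's and carol's data is exposed.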


But this is similar to how any phone/email/contact-book app will allow you to upload or manage your address book. Your friends could leak your email address or phone number by installing a malicious app and granting the permission; do you blame Google/Apple for this?


For one, Google and Apple would remove an app from the App/Play Store if it was abusing those privileges. My Digital Life would never have passed App Store approval in the first place, since it in no way demonstrated why it needed the amount of personal information it requested. Facebook clearly had no such monitoring.

Not to mention the scale of data is radically different. I'm not over the moon about someone uploading my phone number to an unknown service, but it's mild compared to someone uploading my phone number, friend network, wall posts, photos shared, likes, etc. etc. etc.


I'm not so sure that it wouldn't have passed App Store review back in 2011; this was way before most privacy crackdowns began. It is also the case that the app claimed that this data would be used to analyze "your digital life" AND that it would be collected for research purposes only. The Play Store definitely would've let it through.

As for the quality of the data: right now any of your friends could have apps installed that are secretly leaking your private conversations to a third party, and there is no realistic way for Apple or Google to know. That information seems more sensitive than what you listed. Why are you setting the bar higher for Facebook?


> Why are you setting the bar higher for Facebook?

The difference is this:

> there is no realistic way for Apple or Google to know.

There was a way for Facebook to know. The tool was using their own API, so they knew exactly what was being fetched and how often.

I'm also not sure why it matters. Let's say Apple and Google are just as bad, I don't particularly care. It doesn't make what Facebook did any less worthy of scrutiny.


>Where is the actual evidence that a journalist writing an article critical of COMPANY is doing so because they are focused on maximising revenue for their own company?

A recent example is Bloomberg and their China chip-hacking story (did that ever turn out to be true?): https://www.bloomberg.com/news/features/2018-10-04/the-big-h...

where the writers get compensated if their story moves stock prices https://www.businessinsider.com/bloomberg-reporters-compensa...


Well, you removed the word "Facebook" from the quote there, and the Business Insider article itself states that the practice is extremely unusual in the industry.

I'll agree that if Bloomberg was the sole reporter of a controversial story about Facebook that couldn't be proven I'd be very suspicious. But as far as I can tell, that hasn't happened.


First, I don't think moving the price of an OTCBB stock counts.

Second, there is still the issue of accuracy, and that is even mentioned in the BI article. Bloomberg is not going to reward reporters for publishing bullshit, because it hurts the credibility of the terminal as a news source.

Third, while I wasn't in news while I was there, I'm not aware of such a policy existing, and I can't find anything more recent than that 2013 article.


This seems like really flimsy reasoning. Sometimes news stories are wrong, therefore 90% of what you hear about Facebook is false?


More like "the explicit fact laid out in the article is true, but its context, meaning, effect on the world, and interpretation are so sensationalized that the original fact becomes irrelevant to the narrative."

E.g., back in the day when the entire world lost its shit over the permissions an Android app was asking for, not really understanding how a permission system for access to APIs works. They seem to think they are legal permissions not technical ones.


>They seem to think they are legal permissions not technical ones.

There has to be either oversight or trust for legal permission to be distinct from technical permission. I think most people feel there is little of either in tech, and I can't blame them. Trust is given out as a social favor, tech companies have systematically abused this social favor by extracting extreme levels of data from every interaction. When I talk to someone face to face, I don't expect them to whip out a video camera and start recording our interaction, and if they did that I wouldn't approach them with so much trust. If a tech company Hoovers up my personal data, I likely won't know it. The abuse of trust is the over-collection of data and that particular abuse is what funds the internet. If you are arguing that there is oversight on the internet, consider that Facebook still won't let you really delete your data.


Most of what the press reports is a distortion of the truth, or makes something sound worse than it really is.

Sure there are issues but all companies have them, just like Google, Amazon, etc. There are a lot of good people doing good work here, and connecting the world for the better. You don't hear about the 99.99% of the good things because that won't encourage clicks for the news rags.

If the press took an interest in your company, or even you personally, no one would look good from their constant digging to make you look as bad as possible. Imagine if the worst moments of your life were publicized for everyone to see, without any of the good things you've done. People would think you're an asshole as well.


This should be the top comment.

Let's take a current headline: "Facebook is facing criticism for not allowing users to opt out of a feature that lets people look them up using their phone number or email address."

There are so many ways to explain how this could come to be and the vast majority of those would not be attributed to malicious intent.

It's not like PMs at Facebook are sitting around wondering how they can exploit people's privacy. They work on a massively complex product which sits on extremely sensitive data. Releasing a feature without properly vetting all of the ramifications (some of which exist completely outside of your purview or knowledge) is the reality.

Could Facebook have better cultural practices within the company when it comes to privacy? Sure. But that's a far leap from assuming that workers there are intentionally trying to exploit people's privacy.

I'm by no means absolving Facebook of the responsibility they have with regard to individual and global impact. But I imagine Facebook employees are aware of the amazing benefit Facebook has provided as much as they are the negative impact it's had. It's not hard to see how many people would believe the good outweighs the bad.


I have huge, huge gripes with tech news reporting. NYT/WaPo are usually reasonably decent. You should look at how unbalanced recent tech articles are:

Look at this horrendous article by Josh Constine: https://techcrunch.com/2019/02/28/facebook-research-teens/

> "So 18 percent of research testers were teens. It was only less than 5 percent when Facebook got caught. "

Very, very colloquial phrasing plus hugely biased, loaded terms like "caught" are being used. Even basic journalistic standards are not being met.

This isn't a news article; it's an opinion piece (and I am not disagreeing with his facts but with how opinionated his presentation is).

There's a difference between "GSW lose to Celtics 129-95" and "Low energy GSW show disinterest in the game and get walloped by Celtics 129-95".

And Josh is one of the more popular tech "journalists", so you can imagine how problematic the whole industry is. There's no "news". There are just opinion pieces - all of which come from a single elite SF/NY/LA bubble.


The latest from Josh Constine: https://techcrunch.com/2019/03/06/facebook-living-room/

"Perhaps this will just be more lip service in a time of PR crisis for Facebook."

"Now Facebook might finally see the dollar signs within privacy."

Shamefully opinionated comments again from Josh.


>It's not like PMs at Facebook are sitting around wondering how they can exploit people's privacy.

I'd bet serious money there are. They won't let themselves think of it in terms of "exploiting privacy", but use some friendly-sounding framing like "leveraging data collection", which is still inherently inseparable from the exploitation of privacy.


Meh ... the same could be said of virtually every job. You'd be hard-pressed to find any company (including yours, or your clients if you freelance) that has squeaky-clean operations.

By that example, do you think everyone working at Nike is using friendly-sounding framing like "making high-performance shoes affordable at scale" when really what they're doing is exploiting kids in developing countries?

There is a cost to doing things at scale, and living a modern-day life in which you don't directly benefit from that is virtually impossible.


This may be true, but it's very easy to argue that an organization with the global influence of Facebook deserves that kind of scrutiny. The average Joe may look like an asshole under the same lens, but I would also say the potential effects of his assholery are generally far more limited by comparison.


Never said that Facebook doesn't deserve scrutiny. But it should be fair scrutiny, not unfair, biased scrutiny that gets incentivized to only show the negative.

I think that if the scrutiny were fair, perceptions would be drastically different and much more favorable (and the news reports would be more boring). Yes, people make mistakes. But there's no global conspiracy, and Facebook is operating at a scale no other company ever has, so for the most part we are learning as we go along as well.


It is interesting how so many people (here and elsewhere) love to point out how Facebook loves controversy because it drives engagement, then immediately turn around and hotly deny that journalists also love controversy because it drives engagement. :eyeroll:


> There are a lot of good people doing good work here, and connecting the world for the better.

What does that actually mean? This sounds like a stock phrase from a press release.

At its best, I guess you could say Facebook helps you kinda-sorta keep in touch with old friends you don't see that often. But I somehow suspect that's not what the majority of FB engineers are actually working on.

Your point about the press focusing on the negative is valid, but I have to say the lowlights for FB are a lot worse than anything I've worked on in my career.


There are millions of people connected in various Facebook groups, based on things like illnesses, shared concerns, etc. The ability to share information and experiences and support each other is unparalleled.

Just because you only use Facebook to connect with friends doesn't mean there aren't billions of people using it for other extremely beneficial purposes.


There were plenty of internet forums and even IRC and Usenet before Facebook. Maybe FB has pushed out other, better places for people to gather and communicate?


There is a lot of value in a marketplace for discovery and a consistent interface. Let us not echo the sentiment of the infamous poster suggesting Dropbox would fail because "who wouldn't just spin up an FTP server backed by source control".

From the standpoint of democratizing the internet, for better or worse, I don't think we can sit here and deny the formative role Facebook played in getting people who were shut out from the net involved with it.


I guess I would argue that Facebook is not "the net" -- it's AOL 2.0. You're probably right that it has value of its own, but it's not the same value as an open internet, or the promise of what was once the worldwide web.


Backdoors in your cell phone OS can be used to catch criminals. They can also be used to spy on innocents. Sometimes the cost isn't worth it.


> At its best, I guess you could say Facebook helps you kinda-sorta keep in touch with old friends you don't see that often.

It's more than that. At its best (your phrase) FB helps marginalized people or those in rare circumstances (e.g. health-related) find others with similar experiences who can offer advice and support. It has helped people reach out to one another during emergencies. Yes, those are extreme cases, but you used the phrase "at its best" so let's not understate what the best is.

> somehow suspect that's not what the majority of FB engineers are actually working on.

Of course it isn't. Like at every other tech company, the engineers are mostly working on their own piece of technical infrastructure. They're not directly out there saving lives. Nor are they directly out there ruining any. Is it fair or useful to blame individuals at Facebook for the worst of what their company does, and simultaneously deny them any credit for the best, with no idea what role they actually play or what direction they're trying to push the company in?

Does anyone here suppose that oppressive governments around the world aren't also using RHEL and Hadoop to target their victims? Dell servers? Cisco switches? Who provides their cars, their guns, their bone saws? There are many companies at least as complicit in all of these awful acts, so why is the tsunami of self-righteousness only directed at one? Are people motivated by a desire for justice, or a desire for the approbation of others? Dopamine addiction isn't limited to Facebook users.


> Sure there are issues but all companies have them, just like Google, Amazon, etc.

Ahhh, whataboutism, I love it.

> If the press took an interest in your company, or even you personally, no one would look good from their constant digging to make you look as bad as possible.

Unless they straight-up lied, they'd find a middle-aged developer working for a small/medium-sized manufacturing company that treats its staff and customers well.

Facebook operates at a scale where, even if it has good intentions, its outcomes can be the same as those of someone intent on doing negative things; it's the nature of the beast.


It does feel like a lot of the news is misrepresented or skewed. Take CA as an example. This was FB acting as a developer platform, and the following happened (to the best of my knowledge), using Apple as a substitute:

1. Your friend downloads a slick email app from company X on the App Store

2. The OS warns your friend with a big blocking screen that this app wants to read all their emails and their address book; your friend clicks allow, and company X uploads your friend's address book and mail to their servers for what they claim are some cool autocomplete and search features

3. Apple has also made company X sign an agreement not to misuse the data in any way outside of what's needed for "core functionality"

Later it turns out that company X took the data from their servers and sold it to someone else, which means that your email address and any emails you sent to your friend are now leaked.

Now you can be mad at your friend, company X, or Apple, and I feel like so much of the focus has been on "Apple", i.e. FB, when the main piece of criticism is that perhaps users shouldn't be trusted with a permission this broad (which had already been fixed 4 years earlier).


Exactly. FB has done some crappy things (changing defaults without clear communication of the consequences being a major one: http://mattmckeon.com/facebook-privacy/) that LED to CA being a problem. I'm willing to bet 99% of people don't realize that WHEN they signed up for Facebook determined their default settings, and that their defaults might be different from their friends'. And letting developers have access to "public data" can be debated, especially when they are playing so fast and loose with what gets posted publicly by default.

BUT, the developer itself is the one that got people to fork over their friends' info, and having friends who will give up their friends' privacy to play a personality quiz speaks way more to the public than it does to Facebook.


> the main piece of criticism is that perhaps users shouldn't be trusted with a permission this broad (which had already been fixed 4 years earlier).

Agreed. This broad permission was also widely known well before CA. IMO the only reason the CA story blew up is that lots of people were upset about the 2016 election and wanted somebody to target their anger toward.


Traditional news media gave the election to Hillary. And post-election they found it convenient to blame it all on Facebook.

Facebook certainly isn't influential enough to swing 20 points in an entire US presidential election all on its own (it obviously contributed, though).


"I'm really curious what the other side is here, especially given FB hasn't slowed down at all despite all the press."

Shouldn't this whole question be directed at the users of FB instead? If they don't care enough about this to stop using FB, why would you expect the developers to care enough to give up part of their fat paycheck?

I personally refuse to work for companies that I think are likely to do business in a way that I think is unethical. But my ethics aren't the same as everyone else's, and I can perfectly well understand not caring about most of the "privacy" scandals that Facebook has if the users don't care.

Everyone is entitled to give up their privacy whenever they want, and everyone should be aware at this point that Facebook plays fast and loose with that privacy. If they continue to post on FB after this, I can't really feel sorry for them.

And for the record, I still use Facebook as much as usual. That isn't a lot, but it's some. I post only to my friends, and I only post things that I'd be willing to post on a public forum anyhow. Like any other tool, you have to be careful while using it.


The other side is 90th percentile market pay...?

At a certain point in a company's lifecycle, most employees shift from being intrinsically motivated (by the mission, the team, the leadership) to extrinsically motivated (by the money, benefits, and resume stamp). FB is no different, and that shift happened way before 2017. I would guess most people who happily work at FB did not care before Cambridge Analytica and do not care post-Cambridge Analytica.

If anything, I would guess FB employees are feeling relatively better now given the broad criticism of many tech giants (Amazon with cities/labor issues; Google with gov't censorship; Amazon/Google/MSFT with DoD controversies). The main thing FB has going for it, I think, is that they don't (yet) compete for any military business, so at least no one's accusing them of helping kill people.


I don't see the complexity here. They pay competitive salaries and people don't give a fuck about what FB does and accept the gig. Simple.

Being able to say: "No, I don't work here because of principles" is something really nice, but not something everyone can afford to do.


> Being able to say: "No, I don't work here because of principles" is something really nice, but not something everyone can afford to do.

I'd question that. FB undoubtedly pays very well and probably better than almost all competitors, but at the same time most FB engineers could almost certainly take work somewhere else with a slightly lower (but objectively still fantastic) salary and still continue to live a very comfortable life.

I'd be interested to hear other reasons why people continue to work at FB, but "I can't afford not to" feels dubious to me.


Getting paid is a pretty powerful motivator. Maybe you can afford not to, maybe others don’t want to afford not to. Salaries are good, the experience is good, job hunting sucks.


Maybe we should stop considering the experience "good"?

I know I'd look slightly askance at a resume that was full of tobacco companies. Maybe we should feel the same way about surveillance-tech companies?


> I'd question that. FB undoubtedly pays very well and probably better than almost all competitors, but at the same time most FB engineers could almost certainly take work somewhere else with a slightly lower (but objectively still fantastic) salary and still continue to live a very comfortable life.

I think what you're saying is true for people who already work at FB, but I think you underestimate how much of a boost working at FB is for people who are deciding whether to take an offer. Having dropped out of school and only worked at unknown startups, I had previously found it very hard to even get interviews but after I worked at AWS suddenly people were interested in hiring me. I am really grateful that I had the opportunity to work there.

When I was at AWS I often asked people how they felt about working there and by far the most common response was it wasn't the greatest but it would really help their resumes. Obviously anecdotal but it leads me to think my situation was not uncommon.


Teams like the React team get paid well, are "separated" from the actual product, and work on something that means a lot to them.


I have personally told them this, and my rationalization for "closing the door", so to speak, is that there is (hopefully) no future reality in which Facebook is the only place offering me a job I'd accept. If I can get an offer from Facebook, I can almost certainly get an offer somewhere else.


99% of their engineers could afford to do the right thing. They have plenty of alternatives that pay very well, but they'd rather sell their soul for a few more luxuries.

They are choosing to be bad people.


You're discovering the dirty secret that a lot of people generally don't care about doing the right thing when doing the wrong thing is so lucrative. People will talk a big game but shove a pile of cash in their face and things get complicated fast.


Capitalism 101


"Bad people" feels a little trite. What about individuals who feel they are truly joining to try to make a difference for the better in improving the culture, security, data privacy, or overall business? Too altruistic or is that part of the story?


I like your optimism, but, again with all the bad PR, how could someone join FB and believe they are improving the culture? Unless they want to join to try to change it from the inside?


I don't work at FB but I don't agree with this.

Is it really better in the long run that people who disagree with privacy intrusions leave? I'd argue that it's better they stay and are part of the conversation to improve things.

If all that's left at FB are people who are solely motivated by greed, and I am sure there are certainly enough of those people to run FB, isn't that only worse for the users?


This view seems somewhat reductionist, no? I don't work at FB and never have, but I could easily see a case for working on projects that contribute back to the developer community (e.g. React). These wouldn't be directly related to user tracking/privacy violations; the developers might still be selling their souls, but they are contributing something useful to the world. In this case, what do you think would step up to fill the void in OSS, and what do you think is the best move for one of those developers?


This is right. To think you have no choice means you are part of the problem.


You are showing your privilege.


You are showing your entitlement.


I work at Facebook. I may have the opportunity to work on a privacy-related product which, if implemented, would have prevented at least one recent headline. I'd like to get people's thoughts: would working at Facebook on such a product be ethically permissible? Does working at Facebook in a capacity that improves privacy for its users absolve me of that sin by association? What if I only work on such a product for some of my tenure?


Current Facebook employee. I'll try to keep this short.

Some of the things Facebook has done have made me sad, or angry. I do wish we could evolve toward that perfect balance of privacy, authenticity, connectedness, etc. more quickly. OTOH, I do see a lot of good people around me, many who share those concerns, and I do see progress being made, and I know none of us will make a damn bit of difference from the outside.


I don't work for FB, but in my experience such decisions are most often made by Product Managers who do not understand the privacy implications. They typically try to bypass security/privacy teams to avoid the hassle of dealing with them. Also, engineering teams are sometimes siloed and do not realize that the data they are messing with is sensitive in nature (MFA phone numbers in the most recent case). Most of these features aren't designed with explicit malice, but usually to further a goal of linking more data, generating more insights, showing better ads and so on, and that's what interests most engineers.

In many cases, of course, they tend to hide behind user consent: "The user consented to this very broad privacy policy which allows us to do whatever we want, so it's okay." Having said that, FB did hire a bunch of privacy experts to fix institutional issues.


> They typically try to bypass security/privacy teams to avoid the hassle of dealing with them.

The PMs I worked with at FB definitely wouldn't do that. Having those teams find a problem after you've built a feature is much more of a hassle than getting their help to do it right in the first place.

On the other hand, product teams would often fail to get those reviews just because they didn't realize there could be a problem.


Because there are no negative consequences for anything as long as you meet your deadlines.

And you are fed constant propaganda about how you are making the world a better place by connecting people and freeing speech. You start to believe it.

And when you bring up ideas which actually make it hard to spread false information but might reduce engagement metrics, then you are laughed out of the room. Seriously.

Until Facebook's stock price is affected by actual lawsuits, no one inside Facebook will ever be fired for generating negative news.


As a person who has never had a Facebook account, can someone explain what societal good they've done? I see none.

I see espionage on a massive scale. I see democracies weakened for profit. I see self-obsessed shallow folks constantly taking selfies to post online. I see the degeneration of political discourse into pithy statements that can be 'liked'.

What good have they done, exactly? Helped people share photos and status updates?


It's a tool, right, and from a usability, stability, security, and functionality perspective it's second to none. No other program (for me) works more seamlessly between desktop and mobile.

Its messaging app is first class. Tons of features. I can type a message on a PC, then carry my phone with me and pick up where I left off.

Its groups allow people to self-organize at a scale I don't see anywhere else.

Its newsfeed is what you make of it. If you follow your friends, you get what your friends share. If you follow ars technica, foreign policy, indewire, mit, open culture, priceonomics, saveur, you get ars technica, foreign policy, indewire, mit, open culture, priceonomics, saveur. There's really no one to blame but yourself and the people you follow if your newsfeed sucks or isn't what you want it to be. Follow better people. Fake news, tribalism, viral videos as the new opiate/soma of the masses - these speak to human nature and what we have become, not to the tool that lets us mindlessly consume and push away our problems.

It's Gmail, Google News, Twitter, Instagram, YouTube, Reddit, all rolled into one. For a lot of people who want a stable product that works well and does those things, it gets the job done. I totally think Facebook wields its power irresponsibly at times, but blaming it alone for weakening democracy is like blaming roads for bad drivers.


Thanks for the critique of Facebook as a technical implementation. I was curious as to what societal good the company and its products have done.

It sounds like "saved people a bit of time by doing all the things separate services used to do".


What about the millions and millions of small businesses across the globe that were able to grow, reach their customers beyond the borders of their cities, states, or even countries, create jobs, create stability, and bring goods and services to the people that want or need them?

What about the charitable-giving products that Facebook builds, which have collected billions of dollars for charities globally?

What about the first-of-its-kind Crisis Response tools that let people involved in natural disasters, terrorist attacks, or other horrible situations communicate with their family and friends across the globe and let them know they're safe - for free?

What about the groups functionality that lets people come together to support one another through a cancer diagnosis or the loss of a loved one (grief counseling), or coach and help one another land jobs?

I could go on for a little while longer, but the point is that Facebook isn't the only company or service that can do the things I've listed above; it is, however, the only one that does all of them and doesn't charge the end user. Let's not kid ourselves: building those products and the infrastructure they rest upon is neither free nor easy, and it requires tremendous talent and resources.

Facebook is a business like any other and it has to make money, and it does so through ads. So if you judge it purely on your own experience/use case, which may not include any of the products/features I mentioned above, it may seem like it doesn't produce a net positive result.

But for those users that have and continue to use facebook in these ways, the benefits are obvious.


Basically. That and "an archive of my history": a data repository of photos and contacts. Some of the "good" is people finally feeling like they found a permanent place to store things without them disappearing.

Facebook is glorified AIM+Webshots+Digg. The only thing it did differently from most of those other products was successfully make the transition to mobile. Most other companies got replaced by mobile clones. Instagram almost did it, became the mobile Webshots, but FB swallowed them.


At a basic level it lets people stay in touch with family and friends they might not otherwise see. That's not a bad thing?


I would love for FB employees to explain to me how they justify that their product literally hooks fragile users on the internet, creating chronic depression, FOMO, etc.

To me, it is no different from heroin given to a junkie.

You just need to travel anywhere in the world and look at all the tourists spending their time Instagramming everything. On the other side of the screen you have fragile teens who get depressed because their lives are not as good as the fake paradise on display on Facebook/Instagram.

This is the culture that Facebook and "attention-grabbing" social networks have created. As an employee at Facebook, you directly contribute to that! Yes, it is a symptom of something more deeply broken in society, but you act like heroin to a junkie, facilitating the intake.

If we had to rank these things, I would probably put Facebook and other predatory attention-grabbing networks (YouTube is another one) in my top three worst things going on in the world right now.


Lots of things can be harmful or useful based on how you use them. You can learn many useful things from the internet while there are people looking for child porn and recruiting terrorists. You can use a car to get places quicker or to run people over. You can use nuclear power to generate clean electricity or to make bombs.

A lot of people, including me, use Facebook services to stay in touch with friends and family; most of mine live thousands of miles away, and I find it incredibly useful. People use FB groups to form communities around lesser-known causes (like a rare disease or a dying language) and get in touch with people they might never have found otherwise. At the same time, Facebook can be used in harmful ways like you mentioned.

Sure, Facebook could perhaps regulate how people use its services to protect "fragile" people, but don't people deserve the freedom to do whatever they want with their time? Should we also have laws about how long you're allowed to use the internet every day because it's addictive and can be harmful?


Back in my hometown, a small village, we had a gangster. He made tons of money from his illegal empire. But he also took care of everyone around him. Even those who were not involved in his criminal operations were taken care of.

The press wondered: how could these people trust this gangster? What did they think and feel about the smuggling and other crimes being reported daily?

The answer was: "This so-called gangster has built orphanages and donated to religious causes. He has been a net positive to society. While this gangster does some bad things, the press is harsher on him. The media knowingly writes bad stories because they are jealous of his success or have a hidden agenda. He just wants to do the right thing and his heart is in the right place. But he is human and makes mistakes. And before you label him as the worst, do look at those other gangsters. They are just as bad, if not worse, than our hero."

This is a story which repeats itself over and over again.



You want to know the other side of Silicon Valley?

Download the app called "Blind", where people talk openly while being anonymous.

What you see there is that everyone is trying to maximize their TC (Total Comp). For most people, that's the only goal. The whole "Let's make the world a better place" is something that was sold by PR people.

To talk more specifically about Facebook, I have seen two attitudes in its employees:

1) They use whataboutism when confronted: basically, they will point out that Google and Apple are also evil, so it's OK for them to work at FB.

2) They openly say that they know it's a bad company, but it treats employees very well and they are extremely well paid, so they're OK with it.

For me, it makes me lose faith in engineering.


There's also a third attitude where people don't think FB is bad. In fact I bet a lot of people think it made/makes the world a better place.


FWIW, I found very little overlap between the opinions that were popular on Blind and the opinions of my colleagues who opened up to me. The people who become active Blind users are the ones who are already frustrated and want to complain loudly.


> What you see there is that everyone is trying to maximize their TC (Total Comp). For most people, that's the only goal. The whole "Let's make the world a better place" is something that was sold by PR people.

Facebook targets 95th-percentile comp. That's very good, but if you're an elite software engineer motivated only by comp, you're not going to settle for the 95th percentile.

Other reasons that factor into job choice for most engineers: teammates, location, work/life balance, interesting projects, engineering culture/practices


> For me, It makes me lose faith in engineering

I joined a certain tech company in the '90s when everyone said they were going out of business (if I don't call out the company, it might be because I still work there). If money were what motivated me, I would have learned to program Windows and headed to Redmond instead.

In fact, I found that nearly everyone I met at said company was also there simply because they loved the company and its computers.

Times change? (AKA: Money ruins everything?)


FWIW, I'm an engineer, and creating a stable financial environment in which to (hopefully) one day raise a family is definitely a goal, but maximizing total comp sounds like a miserable way to go through life. If I wanted to maximize total comp, I'd go into finance.

I think folks generally forget that money is a tool to help you achieve what you want in life.

Anyway, thanks for the heads up about Blind, I hadn't heard of it and I'm quite curious now.


The dot-com boom/bust was all about money, so I'm not sure why anyone thought this time around would be any different. People are in it for the money; it just wasn't until maybe 2010-2012 that it was confirmed the money was really there.


Google and Facebook seem to put a premium on analytical thinking and nothing on qualitative thinking. All this seems to lead to a monoculture of only hiring analytical people. Maybe that is why they are okay with those two rationalizations?


Going anon here, for obvious reasons.

People rationalize it two ways: 1. Whataboutism. Yeah, well, what about Google? What about Amazon? 2. Whatever, they pay me well and the perks are good.

My view is more nihilistic: Zuck is the same as he's always been. This is the same dude who said "They trust me, dumb fucks". Don't get it twisted: Facebook is about data collection and data sales. Everything else is a happy smokescreen diverting user and regulatory attention away from that purpose. Connecting people and all of that handwaving is a means to that end.

You trusted him/us; dumb fucks. That's basically about it.


At some point, there is more to care about than money. He has plenty. He has more power than almost anyone in the world. Now he cares about legacy and impact on the world. Once he is gone, what will he be remembered for? What change did he lead?

I don't think he is the same person, because he has it all now and could walk away and never work another second in his life. He legitimately wants to make the world a better place for himself to live in.

That said, "Never attribute to malice that which is adequately explained by stupidity." Just because he wants to make change doesn't mean he goes about it in the right way. And because of the way the media treats him, everything out of his mouth is PR-approved doublespeak.


A little background: I was an intern at FB just before and while the whole CA thing was unfolding, and I'm returning for another internship, hoping to join full-time one day.

I had multiple offers from other Big-N companies and unicorn startups (some of them paying more than FB). But here's why I chose to still work at FB:

1) The values and goals of the company and the people at the company are genuinely positive.

No PMs are going around wondering how we can profit off of innocent people. Goals for teams are oriented around "impact": e.g., how many small businesses are effectively gaining customers due to new feature X, or whether feature Y is actually making people happy and good for them in the long term. An example would be Facebook trying to encourage more "meaningful interactions" on the platform, even at the cost of users spending less time on the platform looking at clickbait.

2) The company and the leadership accept blame and work on fixing the mistakes.

Unlike some other fruit company, Facebook has always (although sometimes not as soon as it should have) accepted the blame for the repercussions of being the disruptive platform it is. People in leadership personally take charge of issues and work to make the platform better. I know a lot of people believe Zuck is pure evil from what they hear, but I don't think a CEO who's apathetic would be willing to pull the platform out of countries whose laws don't fit the company's morals and values. (Read - https://www.facebook.com/notes/mark-zuckerberg/a-privacy-foc...)

3) Every individual employee's voice is heard.

Anyone in any FB office around the world can personally ask the entire leadership any question every Friday in the company-wide Q&A. And more often than not, these questions are complicated. Misinformation on WhatsApp leading to lynchings: what are we doing about it? Foreign governments intentionally interfering in elections: did we know about it, and if yes, what's our plan?

No question is a bad or stupid question. And yes, we get better answers than those PR statements. Knowing the gory technical details of what happened and what specific steps are being taken convinces us that the necessary actions are underway. I don't know of any instance where employees had to go on strike and sign petitions to find out why the company was working on some project that doesn't align with its morals.

4) FB is doing something that has never been done before, at a scale which barely any other company has ever reached.

All of us know how much responsibility we have working on such an impactful product. But that doesn't mean we'll stop trying new things. Those new things might not be good in the long run, but there's only one way to find out. If we think something is going to affect people's lives positively, we're going to try it. Critique is expected and welcomed, even when it isn't accurate. This may not morally align with everyone, but it is what I generally saw while working there, and it's something I align with personally. Not being afraid to use that influence to launch something new and disruptive is not something everyone can agree with. And FB historically hasn't done much research before launching products, but it now seems to be taking the eventual consequences seriously. That still doesn't change the whole "hacking" culture of just trying out new stuff.

5) FB does care about privacy. Shocking, right?

From the limited three-month view I have of the company, everyone is crazy about privacy. There are ongoing discussions on how to deal with complicated situations, and teams dedicated to consulting other teams who might not have the necessary overview of the possible effects. Encryption is a hot topic, and research is being done on how bad actors can still be detected and reported based solely on metadata, keeping the information private. And there's no easy way to explain it, but the media lately has been extremely biased against FB, reporting stories in a manner that distorts the reality of events. Every small mistake (deliberate or not) gets exaggerated with clickbaity headlines. I do believe that FB is completely responsible for what happened with Cambridge Analytica, but reading some of the articles about it just made me aware of the lack of technical knowledge, and of any real intention to see the bigger picture, in most of the mainstream media.

6) We see the positive impact happening real time.

No newspaper will put on its front page the FB communities that allowed LGBTQ people to connect in countries where it's a crime. Or the fundraisers saving kids who got separated from their parents. Or the relationships that only exist because of the platform. Or the flourishing small businesses that previously didn't have a chance to compete with the big dogs. Don't get me wrong, bad things still happen. There are instances of bullying, misinformation, stalking, and a lot more. And people at FB are genuinely trying to limit that. But in the big picture, most of us do believe that we're having a positive impact on the world. And where we're not, we're willing to change it, regardless of the company's profitability.

TL;DR: Obviously I'm biased, but I like that FB isn't afraid to make mistakes and own them, I genuinely think FB is doing good in the world, and I believe it's moving in the right direction.





