Hacker News
A whistleblower says Facebook ignored global political manipulation (buzzfeednews.com)
434 points by contemporary343 15 days ago | 200 comments

None of this would be important if social media gave us what it originally sold us: updates from your friends, family, and people you want to see updates from, in chronological order, rather than ordered by weird engagement algorithms and privacy-destroying ad networks.

Why would they give up control of the world by doing something silly like that? Think about how much political influence Twitter has based solely on which tweets it shows the President and the corporate press. Consider how many untraced in-kind donations these companies can make by tweaking which news stories you see. The crazy thing is that while these levers can be tweaked by humans, they're largely controlled by AI now, and no one person completely understands what's happening in any of these systems. We're in the early stages of AI controlling the global political future, and it will tend to create whatever kind of future generates the most clicks. It's kind of like the game Universal Paperclips, except with clicks/rage/ads.

This needs to be more widely understood by the population.

Our world-destroying-AI paperclip maximizer is here. It's called a "news feed".

The thing that gets the most clicks is outrage, as the AI has discovered. We're setting people against each other in a more and more efficient fashion.

The result has been clear since the Arab Spring. Good things don't come from helping people hate each other in the most efficient way possible.

Banning AI-driven-click-maximizing news feeds would be a healthy start. Right now, they're doing serious damage to our world.

We are, but the blame has to be shared, too. In many cases the algorithm is being reinforced by your actions and everyone else's. It may not even necessarily be specifically trying to prioritize polarizing content: that just might be the content you engage with the most, and the algorithm blindly follows your whims and preferences.

I've used YouTube for many hours per day for several years. I've almost never seen a single thing appear on my home page that was polarizing or even clickbaity. The very rare times I do (always after I watch something that's kind of adjacent), I just click "Not interested" and I never see it or anything like it again. It's done a pretty good job of predicting what I would and wouldn't be interested in.

Same with Twitter. I just unfollow anyone I find tweeting polarizing or charged things. My Twitter feed looks pretty close to the HN front page.

Many people crave these things, whether they want to admit it or even realize it. I think it's going to be this way for a very long time.

Sorry but this is a very naive view of human psychology. The Social Dilemma on Netflix does a good job of explaining why asking people to just exert more willpower is not the solution.

Maybe it is. I'm just repulsed by stuff like that and try to get it out of my sight whenever possible. I figure most of HN is the same way. It pretty much never appears on any of my feeds, since I don't hesitate to click the "Not interested" button on the rare occasions it appears.

I'll watch that film, though.

The new AI is different from the algorithms of old. The old algorithms are one-size-fits-all, and everyone gets exactly the same schlock. The new AI knows you individually and shows you, as an individual, the kinds of things that you are most likely to engage with. It doesn't matter at all if the kinds of things you as an individual are more likely to engage with have less inflammatory headlines. The system is still doing the same thing to you as it's doing to everyone else.

Right. But that was exactly what I was saying in my initial post (above my previous reply).

That is, until you click, even accidentally, on a single outrage clickbait video. Then YouTube immediately suggests at least 20 other videos of the same kind, and you have to click "Not interested" on videos for several weeks until your feed is sane again.

And IME, it doesn't even have to be videos watched by most people: a subset of "power users" who binge a certain kind of video is overrepresented in the recommendation engine.

>That is, until you click, even accidentally, on a single outrage clickbait video. Then YouTube immediately suggests at least 20 other videos of the same kind, and you have to click "Not interested" on videos for several weeks until your feed is sane again.

For me it takes a single round of a few "Not interested" clicks. Definitely not weeks, or even days.

No, it’s being driven by the lowest common denominator for engagement, which is distinct from personal preferences.

For a self-contained example, spam in large Facebook groups always rises to the top, because many people comment asking for it to be deleted, causing the algorithm to show it to more people, some of whom comment, until a moderator finally kills the post.

These kinds of side effects do not happen in a voting based system or a straight chronological feed.
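The feedback loop is easy to demonstrate with a toy sketch (the scoring here is hypothetical, not Facebook's actual ranking function): any feed that scores posts by raw engagement keeps surfacing a spam post for as long as people comment on it, even when every comment is a deletion request, while a chronological feed just lets it sink.

```python
# Toy feed ranking. The scoring is invented for illustration;
# real ranking systems weigh many more signals.

def engagement_rank(posts):
    """Highest engagement first: every like AND every comment counts."""
    return sorted(posts, key=lambda p: p["likes"] + p["comments"], reverse=True)

def chronological(posts):
    """Newest first, engagement ignored."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

posts = [
    {"id": "vacation-photos", "timestamp": 3, "likes": 12, "comments": 2},
    {"id": "spam",            "timestamp": 1, "likes": 0,  "comments": 40},  # 40 x "please delete this"
    {"id": "local-news",      "timestamp": 2, "likes": 8,  "comments": 5},
]

print([p["id"] for p in engagement_rank(posts)])  # spam floats to the top
print([p["id"] for p in chronological(posts)])    # spam sinks to the bottom
```

To a scorer that only counts interactions, the "delete this" comments are indistinguishable from enthusiasm, which is exactly the side effect the voting-based and chronological alternatives avoid.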

For Facebook groups, yes, this is a big problem. That's one reason why I don't use Facebook. For one's personal feed, I don't think the same issue necessarily applies.

> just might be the content you engage with the most, and the algorithm blindly follows your whims and preferences

No, it's the content that other people engage with. Disregarding the whole "engagement is a good metric of how much you want to see something" bullshit, if you served me food based on what other people like to eat, it would be a weird mix of gluten free vegan stuff, and also coke, pizza and doritos.

I don't want any of that shit. I'm not "many people". I don't need to be fed the same irrelevant garbage as them. But the only way to achieve that is to unfollow everyone and not get many useful updates. Which is what I'm doing, but it's just barely useful, and the popular crap still seeps in at every opportunity.

That just hasn't been my experience with Twitter or YouTube. (I don't use Facebook or other things.) I agree that would be very annoying if it happened to me, but for some reason it just doesn't seem to. I probably watch so many videos that it's just picked up my preferences well.

I do want to see what other videos are watched by large numbers of people who've watched things I've already watched. That's how I discover new and interesting things. I've found tons of great content that way, and pretty much no clickbaity or LCD / popular crap seeps in (maybe once every few months, but the "Not interested" immediately takes care of it). I don't know how common my experience is.

Nobody likes to eat gluten-free stuff; unfortunately, some people like me have to eat it.

Joe Edelman wrote a nice article about algorithms that are driven by metrics:

"This talk is about metrics and measurement: about how metrics affect the structure of organizations and societies, how they change the economy, how we’re doing them wrong, and how we could do them right."


Yugoslavia was well before the Arab Spring. Of course, that was broadcast, not clicks, but outrage was stoked.

(In Lonnie Athens' theory of violentisation, there's a step where the to-become-radically-violent person says to themselves, "I'll become the biggest badass and this will never happen to me again." Milošević has a famous TV speech where, after a spot of "spontaneous violence", he tells ethnic Serbs he isn't going to let them be beaten anymore.)


Yes, this is something that I have realised of late. Very few people, except maybe the most psychopathic, start by believing that they are evil. Most of us believe that we are being victimised. Maybe I am an unemployed youth, outraged at the fact that some minority got a job while I am unemployed. I completely ignore the fact that I am not as qualified as the other person and comfort myself with some victimhood myth.

These minor grievances are later amplified by demagogues. The reaction to this perceived victimisation is so out of proportion that the initial grievance looks minuscule in retrospect.

The main difference these days is that we don't need a visible demagogue. The various "engagement algorithms" play the role of an echo chamber which sends opinions into a positive feedback loop until it enters the regime of social collapse.

Yup. The Rwandan genocide was incited by radio.

You could also argue that the original rise of Fascism was enabled by mass propaganda becoming possible where it wasn't really previously, but that's so overdetermined it's hard to attribute to particular causes.

Then there's the original SARS of bad memetic propaganda, the Protocols of the Elders of Zion: a piece of leafleting from the early 1900s that has contributed to the deaths of millions.

And social media is actually less dangerous in this regard than past media like radio and books, because it makes it easy to comment in a way that reaches the consumers of the original media. You can run a radio channel where you rail against ethnic prejudice, but the listeners of Radio Rwanda are unlikely to listen to your channel. You can write a book refuting the "Protocols of the Elders of Zion", but the readers of the original are unlikely to read your book. Social media is different. If someone incites hatred against ethnic groups on Facebook, you can respond and reach the readers of the original post.

> If someone incites hatred against ethnic groups on Facebook, you can respond and reach the readers of the original post.

"Just tell people that racism is bad"

Somehow this seems to make the problem worse not better, because if you're one of the outsiders you just get added to their mental list of conspirators. And if you've commented under your real name you're at risk of reprisals.

Effective debunking is hard.

Oh, and algorithmic social media makes this worse - your act of commenting on a post tends to raise its profile and cause it to be shown to more of your followers.

>> If someone incites hatred against ethnic groups on Facebook, you can respond and reach the readers of the original post.

> "Just tell people that racism is bad"

What do you think "inciting ethnic hatred" means? Telling people that ethnic hatred is good? So that the only possible reply is that actually ethnic hatred is bad? No, it involves conspiracy theories about an ethnic group and accusations against them: in short, empirical statements that can be refuted.

They can be contradicted, but it's very hard to definitively refute them in a way that will convince somebody who wishes to believe it. Even people without a stake in it will often conclude, "I heard X, I heard Y, sounds to me like it could go either way." If anything, they're taken in by how short and empirical those statements are: it makes them seem more true because after all, if it were false I'd see the evidence myself.

There is no statement of fact so definitive that it can overcome a vast barrage of inflammatory conspiracy theory. If that's done in front of a crowd looking for a simple solution to their problems, they'll pay no attention to refutation of "short, empirical statements".

> "I heard X, I heard Y, sounds to me like it could go either way."

Still an improvement compared to "I heard X", which is the default with radio and books.

I'm not so sure it is, when there are so many well-funded, concerted efforts to deliberately induce ignorance in people. Radio and books have at least some bottleneck where you can at least try to cut off deliberate falsehoods.

Obviously that doesn't always work, but social media makes it effectively impossible. Not only do you have official sources of propaganda, you live in a swamp of anecdotes that are impossible to refute or contextualize. The radio says "Group X is bad", and you yourself may never have had any trouble with X, but a flood of "X did [bad-thing] to me" stories on social media can make you think that you've got personal exposure to it.

Ultimately, I believe the problem rests with the individuals who are willing to be manipulated. The human organism has a massive security hole that has been very well exploited, and I've got no idea how to patch it. No amount of coherent, cogent argument will change the mind of somebody bent on having a target to hate, and people have gotten very good at insinuating those targets.

> I'm not so sure it is, when there are so many well-funded, concerted efforts to deliberately induce ignorance in people. Radio and books have at least some bottleneck where you can at least try to cut off deliberate falsehoods.

I don't see how you can believe these things at the same time. If these efforts to induce ignorance are indeed so well-funded and concerted, how can bottlenecks be an obstacle? Indeed why couldn't these well-funded, concerted efforts use those bottlenecks to cut off people disagreeing with them?

I meant to suggest that if you have a few large, expensive propaganda organs, you have some hope of running your own counterpropaganda campaign: disprove it at the source, and maybe you can convince people.

Of course if the money can shut you down entirely, you can't succeed. But it's hard to completely control radio and books. It can be done, but only by making your totalitarianism clear. It's more effective if people think they came to their conclusions on their own, and have been exposed to "all sides".

Today, they can bolster their organized propaganda with a sufficiently effective astroturf campaign (magnified by social media). The two provide confirmation for each other, and appear to be independent. They may even actually be independent; they don't need to formally coordinate if they roughly agree on the ends.

So people believe they're getting "all sides" of the story, and there's no need to shut down disagreement. Instead, people hear the message from two sources that affirm each other, and disregard any disagreement willingly.

some ideas on changing minds (not by coherent, cogent argument, but by providing attractive alternatives to hate?): https://news.ycombinator.com/item?id=24373042

also, the remedies chapter of: https://drive.google.com/file/d/0BxxylK6fR81rckQxWi1hVFFRUDg...

still looking for more in this vein.

from HN, the discussions on https://theintercept.com/2016/09/07/google-program-to-deradi... suggest attempts to de-hate might not be uniformly well-received.

> You could also argue that the original rise of Fascism was enabled by mass propaganda becoming possible where it wasn't really previously

Genocides and other war crimes have been a staple of humanity since Ancient Greece. Authoritarianism, of which fascism is a subset, only depends on one thing, and that is masses willing to be led.

...and technologies, including mass propaganda, have extended their reach and scale. They didn’t have trains delivering millions to gas chambers in Athens, helmed by people listening to Hitler’s speeches on the radio.

Saying "x has always existed" is different from reckoning with the scale at which x is present. That is, in fact, one way that we measure progress. Inarguably, we will never be able to stamp out our baser instincts and behavior, but we should strive to reduce their presence and impact.

> We're in the early stages of AI controlling the global political future

If you just broaden your definition of AI a little to include the collective decision-making processes of a market-based corporate-controlled press and political system, this has been going on for decades, if not centuries. It's only recently that computers got added to the mix and inevitably bubbled to the top of the heap. Or, I should say, helped bubble some people to the top of the heap.

This is exactly what I'm getting from Twitter. My feed is the latest updates shared by the people I follow. I don't see clickbait, I don't see outrage; I have a curated feed aligned with my interests. It's true that about once every couple of months Twitter decides to switch the order of tweets from latest to most recommended, but that's easily fixed in two clicks. The moment Twitter removes the option to see the feed in chronological order is the moment I delete Twitter.

On a slight tangent ... the one thing about HN that lets me breathe is that the links that turn up are the same for everyone on the planet and is not "personalized". If it were, I'd be gone in a jiffy.

Maybe we should rename "personalization" to something with a negative connotation: perhaps "bubblification"? "narcissization"? "comfortzoned"?


Ah yes! Let's just call it the "spoon feed".



Facebook had this but everyone decided to friend every person they ever came into contact with, leading to an unmanageable stream of nonsense. Then they introduced the algorithmic feed and it became more manageable.

I personally was fine curating my feed and only friending people I wanted to follow, but that's just not how it came to work socially and culturally.

If only Facebook chose to make less money...

Seriously though, their choices to run the platform the way they have were fundamentally shaped by profit and the stock market. The type of corporate moderation you’re suggesting doesn’t exist.

Well, Craigslist and Wikipedia come to mind as counter-examples, but it is rare.

Just for the record, Craigslist is a for-profit company and makes quite a lot of money. Very different from Wikipedia.

> Craigslist is a for-profit company and makes quite a lot of money.

I thought they just made a modest amount of money.

However, I believe their benefit to society is significantly higher than Facebook's, just because of R number 2 of "reduce, reuse, recycle".

They grossed $694M in 2016, according to Wikipedia[0].

I think Craigslist benefits from a transparent business model that doesn’t rely on capturing and manipulating consumer data. They charge for job ads, which is a pretty straightforward model. Contrast that with the miasmic, tailored advertising business that Facebook has created.

Craigslist isn't necessarily leaving money on the table; they just operate a more transparent business that's more closely aligned with the public good. It's still not realistic to expect companies not to try to maximize their earnings potential; that's like asking your favorite football team to score fewer touchdowns. Facebook as a business is fundamentally misaligned with the public good.

0: https://en.m.wikipedia.org/wiki/Craigslist

They charge for a couple other things, FWIW, but it's mostly commercial postings and certain other big-ticket listings (cars, real estate, services).


Yeah, I think that saying that people are forced to optimize for profit and harm society in the process diminishes their personal agency. Even if you're starting a new venture, you have a lot of choice in how you do it.

Wikipedia and Craigslist are great examples. Craigslist is for-profit, and Wikipedia was originally a project of Bomis, a "web portal" (those words meant things 20 years ago) that tried various things to be profitable and had a good amount of success with softcore pornography. Jimmy Wales was quoted as saying, "You know the press has this idea that I am a porn king. I really wasn't a king of anything, frankly, you know? Because at the time, when we looked at it, we were just like, 'Okay, well, this is what our customers will want, let's follow this.'" And even he, the guy peddling pictures of naked women on the internet because it's what his customers supposedly wanted, ended up spinning off Wikipedia as a non-profit and winding down Bomis.

When you decide to make a company that values profits above all else, it's a choice. You cannot attribute the moral impact of that decision to the invisible hand of the market - the invisible hand cannot reach out and grab things that weren't given to it.

This is like a tragedy of the commons situation where there are massive, life changing incentives for people to break the commons. You can’t run a society where you hope that all individuals will be good.

What I've long wondered is: would it really be that hard/expensive to build an open source social media alternative that does exactly that? Updates & photos from friends & family, in chronological order, and little else. Social media has been around for a while now, so I have to imagine that most of the hard problems around a customized feed and so on have been solved. Probably some idealistic ex-FB and IG engineers would join on, so we'd have their domain expertise. I bet some prominent, wealthy anti-FB types could kick in some seed money to get it off the ground. It could be set up as a non-profit, B corp or foundation of some sort. You could run non-targeted display ads for brand advertising to help cover costs, with the added lure for advertisers that the site would be brand-friendly because it's non-controversial.

Of course it wouldn't have sophisticated features like photo tagging and such, and probably wouldn't be Hip And Cool for Gen Z, but it could be a functional bare-bones Facebook replacement. You'd probably have to disable virality features, and maybe linking to external news sites, just to prevent your racist uncle from posting Breitbart links, I don't know.

Would that really be so hard? Or do the servers, hosting, security and moderation costs just scale exponentially after some threshold of, say, 10 million users or what have you? Supposedly Instagram was running with a very small team when Facebook acquired them.

I think you misunderstand.

People want to be mad. They like it.

It's a serious problem.

I have two Twitter profiles. One follows only makers, tinkerers, artists and educators. I mark "do not want to see more posts like these" if anyone posts something political.

I have another one that follows people with strong political views and the latest outrage.

Guess which one makes me feel better when I view it?

Guess which one I find myself viewing more often?

100%. I do the same thing. On my "safe" account I have 80+ muted phrases (mostly political). On my corporate account I don't really have that luxury.

Same feeling.

Building the app is trivial, building the network of users is the hard part.

The users are already fragmented and / or using multiple social networking platforms / apps.

Maybe you only need to have some tiny fraction on board to make it worth the effort?

Based on a quick glance at the fediverse sites, I think you're looking for friendica.

Link for the curious: https://friendi.ca/

The reason I'm still on Facebook is "events". I want to see which events my friends go to. The problem is getting the organizations that host these events to post their events to other platforms.

And what happens when people start discussing politics and advertisers don't like it?

also check out https://joinmastodon.org/

I think recommended order has its own place. If I go to a social media platform after a week and my friends have posted about 200 posts since then, will I be interested in every post equally? Isn't it better to give me what I will probably like at the top of the list?

>See updates from your friends, family, and people you want to see updates from

Those same people are often the ones sending misinformation and creating the problems. How do you allow people to post updates and not allow them to spread misinformation that damages society?

If people were willing to pay, "privacy-destroying ad networks" which need to be fed by "weird engagement algorithms" wouldn't be necessary, but people demonstrably won't. Ergo...

Blaming viral content on bad algorithms is naive. All that’s needed for fake content to spread everywhere unchecked are a few bad actors, group messages, and forwarding (the reshare button). In some cases this results in genocide [1]. No fancy algorithms are necessary to get exponential spread of rumors. Friends and family will spread any memes that confirm their biases themselves.

To prevent this from happening, it has to be actively suppressed, or at least there needs to be something slowing it down so it dies off. A hands-off attitude isn’t going to do it.

[1] https://gizmodo.com/facebook-still-working-on-the-whole-geno...
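The parent's point can be made concrete with a toy branching-process model (all numbers invented for illustration): if each share reaches some number of followers and a fraction of them reshare, the expected reshares per share, call it R, determines everything. R above 1 gives exponential spread; any friction that pushes R below 1, a forward limit for instance, makes the rumor die out on its own.

```python
# Toy branching-process sketch of reshare-driven spread.
# All parameters are invented for illustration.

def total_reach(reach_per_share, reshare_rate, generations):
    """Expected total audience over N generations of resharing."""
    shares, total = 1.0, 0.0
    for _ in range(generations):
        reached = shares * reach_per_share
        total += reached
        # expected number of shares in the next generation
        shares = reached * reshare_rate
    return int(total)

# R = reach_per_share * reshare_rate = 50 * 0.04 = 2 -> exponential growth
print(total_reach(50, 0.04, 10))
# Friction (say, a forward limit) cuts resharing: R = 50 * 0.015 = 0.75 -> fizzles
print(total_reach(50, 0.015, 10))
```

In this sketch, R = 2 reaches on the order of 50,000 people in ten generations, while R = 0.75 tops out below 200, without moderating any individual post.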

I think this is a good point. Recommendation algorithms exacerbate the problem but they're not the whole story. I don't see how you solve this problem without eliminating the virality that's at the heart of these platforms' profitability. They have a strong vested interest in not fixing it.

They didn’t sell it to you, is the thing.

I am surprised this segment (admittedly picked from Ars's secondary writeup) hasn't made a splash:

"It's why I've seen priorities of escalations shoot up when others start threatening to go to the press, and why I was informed by a leader in my organization that my civic work was not impactful under the rationale that if the problems were meaningful they would have attracted attention, became a press fire, and convinced the company to devote more attention to the space," Zhang wrote.

That is a damage control role. Perhaps more tellingly, it highlights the entire organisation's priorities: if it isn't drawing press attention, ignore it. Of course that's not the phrase FB would use in a press release. They'd deploy a convenient euphemism, such as "dedicate the resources elsewhere".

Facebook is rotten from the core because it is rotten at the head. It is a very sad state of affairs that the one company that would have been able to be a force for good in all this ended up being run by someone who is so morally disconnected.

At this point in time it will take an act of God to fix it, the lock-in is very strong and their ability to buy up anything that even begins to compete with them serves to cement that lock-in to the point where I don't see a way of ever dislodging it.

> their ability to buy up anything that even begins to compete with them...

There is a key difference between being able to afford to buy anything and actually being able to buy it. The competing founders need to be willing to sell. If a competitor arises that is mission-driven, Facebook cannot buy them.

We haven't yet seen Facebook attempt to strangle a competitor that it cannot buy. If Facebook faces an existential threat, there is a lot of capital available to fund its defensive campaign. Buying is just cheaper.

A direct threat to Facebook is unlikely to succeed. More likely is a competitor based in a niche where Facebook cannot enter for structural reasons that slowly out-competes Facebook on Facebook's turf.

One possible niche is the privacy-focused community. Facebook can't flip all of those users, ever. If something privacy-focused emerges that has low-maintenance utility for the broader world, Facebook's days are numbered.

A privacy-focused non-profit (open source?) social media platform that is easy to join; maybe there could be a tool that scrapes your Facebook profile and imports it automatically.

Can anyone help me brainstorm what features would be compelling for a privacy focused social media/event planning/group discussion platform?

To dance that dance, one should first be aware of https://en.wikipedia.org/wiki/Facebook,_Inc._v._Power_Ventur....

Facebook's network graph is its single most-valuable asset. Responsibly placing it back into the hands of its users will require some subtle care, both to avoid Facebook's ire and to avoid abuse.

I'd like to see you stare that Billion Dollars in the eye and say 'no thank you, these are my principles'.

Bonus points if you can do it with your spouse and children by your side.

I'm presuming you are talking about a generic "you", and not referring to me personally, because I've been working in (mostly) mission-driven companies for a decade. I do take a hit on my compensation and wealth for it, with my spouse and children by my side the whole time.

But granted, mission-driven people are less common. To the point that many people who are not fundamentally mission-driven truly do not comprehend that some of us are. We all have our own motivations.

Props to you. I suspect there are more of 'you' (the literal you) than you think, but - and that's the trick - they don't go advertising it.

Can you name some of the companies? Would love to hear some of them considering every company thinks they are “mission-driven”.

Instagram did that. However for social networks you make money from ads, so they’re rotten in a different way.

The hell they did. They took the money and ran.


I am dumb. I was thinking about Snapchat. Sorry.

What do you mean? Instagram was bought by Facebook in 2012 for $1 Billion. So they literally looked the billion dollars in the eye and said "OK!"

Most competing founders have a price. I would be morally torn myself if I had the choice between defending my ideals vs securing my family's financial future forever.

I do think that's a poor way to handle the situation. Playing Devil's Advocate here, but is there a better way for that organization, given its fragile political position and immense power in the world, to handle situations like that? The sheer amount of bandwidth needed to moderate and manage every piece of data responsibly is so vast that there has to be a way of ranking content and trends and allocating resources to them.

It's not morally sound to have an organization optimize for public image, but they are THE public image platform. Optimizing for social good would be so much better, but that's really hard to track and quantify, and it can be so divisive on some topics. Especially now that trends marked "social good" are so quickly developed and redeveloped as other things.

Simple: devote more resources. People, money, teams, management support.

Even if the rational reason for being slapdash is lack of resources and lack of focus, that means the company is not investing enough on addressing the underlying problems. Things happen, and sometimes you need to jump to put out a fire - but if putting out fires is a routine mode of work, it's a symptom of a much wider problem.

I am under no illusion that trying to protect against disinformation and propaganda would be easy. As you said, the volume is so vast that it's going to require a lot of effort and immense focus. Constant firefighting shifts the efforts, and deprives focus.

Whether that's intentional or an emergent property, the end result is the same.

Suppose you have enough resources to try to do it "right". How do you moderate in a way that's globally fair? There is disinformation (which should be moderated), but there are also legitimate disagreements (which should not). Even facts themselves are often open to interpretation, and people don't argue in objective terms.

The degree of nuance leads me to believe it's just impossible to do at commercial/global scale (though I would agree it could be done better than it currently is).

That isn't what's being discussed here. This is the case of artificial amplification of opinion.

One person : one opinion seems a fair goal.

Posting all day about something as yourself is very different than running a network of accounts.

I absolutely agree that the person in the article lacked a significant amount of management support; that was a complete failure of her team.

Facebook trying to "optimize" the world for "social good" is probably one of the most frightening ideas I've seen mentioned this week.

If they are managing to do so much harm through indifference and greed, just imagine what they could do if they were explicitly trying to manipulate the world for "good"?

Powerful groups who try to optimize the world have a pretty poor track record compared to those who just try to do their own stuff right and let the world sort itself out.

Well, if you want the vast profits that come from taking on that role, why not be expected to do it responsibly? It's not like we couldn't and didn't get along just fine without Facebook.

They could allow an NGO/consortium in each country to do the moderation. No reason it has to be done centrally by Americans, based on American values.

> No reason it has to be done centralized by Americans, based on American values.

So let's handle moderation in... let's say Saudi Arabia, by having them silence LGBT and women's rights advocates?

And who is responsible for Saudi Arabian citizens in exile? Saudi Arabia? The country they live in? The country where they are currently on vacation?

This is one of those "only winning move is not to play the game" situations; if you're marketing a product - almost any product - into a repressive regime it'll end up being associated with repressive acts.

Hence the boycott of South Africa.

There was a brief window in which Twitter was allowed to spread dissent pretty freely in the Arab world. I've no idea what that looks like now.

Wow, that creates a really weird incentive for him. He'll get paid more and promoted if he secretly becomes a whistleblower.

I believe Sophie Zhang is a woman.

Oops :[

Pretty much every organisation behaves like that for things that don't threaten the organisation itself. After all, if someone gets killed by a Facebook mob, all that happens to Facebook is the user engagement numbers go down by one.

The pattern of hiring young, passionate, ambitious workers, then telling them their job is of critical importance to the company (and, in this case, society at large) while simultaneously underfunding their team and providing them with completely inadequate leadership is REALLY common in Silicon Valley companies. These same companies will actively stigmatize saying "it's not my job," and so you have very green employees who end up doing work that's wildly outside their zones of competence and comfort, internalizing all the stress that builds up along with being put in that position and not even understanding that speaking up is an option.

Many of these people lack the experience required to see the forest for the trees and they draw similar conclusions to the ones in this memo. "There's no bad intent, we're just overworked and underresourced" (paraphrased) is something I've heard time and time again from people working on supposedly important problems at companies making money hand over fist.

Companies do this a lot with college grads, they sell them on a vision that they will have high impact and an important role in order to get them into their hiring funnel. It's not necessarily an operational failure more than it is a sleazy marketing tactic to prey on the lack of information by young ambitious people. Though it also results in an operational failure and a terrible waste of young talent.

It goes beyond the hiring funnel. These narratives are pushed and often believed internally. The people who actually do the work, who understand that things are not how they should be but don't understand why, who are passionate and ambitious enough to assume personal responsibility for the outcomes of their efforts nonetheless (as if a junior-to-mid-level data scientist receiving radio silence from their management chain can reasonably be expected to protect democracy in places like Ukraine and Azerbaijan), often don't understand that they've simply been put in a fundamentally dysfunctional situation - and it's rare that anyone will actually sit them down and explain that to them.

I really like what you said here, and yeah, it's tough because I don't think anyone would explain this to them.

I feel like we have similar values, do you mind if I follow you on twitter or reach out over email to talk more?

I would have liked somebody to sit down and explain this to me.

It seems strange to call it sleazy and say they're "preying" on young people when Facebook and other big tech companies end up paying some of the best money for new grads outside of pro sports.

They do a year at Facebook, get disillusioned, and then spend 2 minutes finding another job. Not exactly heartbreaking.

Once someone has been anointed a "whistleblower," it is a bad look for you to try to play devil's advocate to whatever she's saying.

Stepping back, without a media circus, how really do you expect to change anything at organizations this large and powerful? Facebook transcends governments dude, Mark Zuckerberg has an unfathomable amount of money and power. At least give her some credit for putting out a non-conformist opinion.

Also, with regards to your specific points, everyone is qualified to determine that political bots are bad. I can't believe you're going with the, "Well she is missing the nuance oh and she gets paid a lot of money so there!" take here.

I'm not trying to play devil's advocate, I think you misinterpreted my post.

> Also, with regards to your specific points, everyone is qualified to determine political bots are bad. I can't believe you're going with the, "Well she is missing the nuance oh and she gets paid a lot of money so there!" take here.

The point I was making in that second paragraph is:

- Company A is wildly profitable.

- Company A says publicly and internally that solving Problem Z is very important to them.

- The team tasked with solving Problem Z at Company A is understaffed, underfunded, and lacks support from leadership.

The "forest" being missed by the people working on Problem Z is the unfortunate reality that Company A does not actually care whether Problem Z is solved or not. In fact, to the extent that solving Problem Z might hurt the bottom line they probably prefer that it remain unsolved.

I use placeholder terms because while this particular story is about Facebook, this same thing plays out all the time at darling companies in Silicon Valley (and presumably outside, but I've less experience there).

I respect the memo author immensely. In similar situations I've just left quietly. The author probably should've done just that, now their name is directly linked to this story.

Don’t forget about getting them hooked on the money and perks.

Based on the substance of the decisions the author of this memo claims to have been making, I suspect they were drastically under-compensated and received insufficient perks.

This is pretty frustrating. She clearly said that she wanted her privacy respected, and they even acknowledge that in the article. So why did they publish her full name and a short description of her LinkedIn, making it even easier to find her? What motivation did they have to do this?

But they hid the name of the software engineer that spoke on her credibility? Something seems a little off, either on the source's side or on the distributor's side.

Maybe the same reason why a journalist tried to dox Slate Star Codex. Which is to say, who knows but it probably isn't good.

It's really uncomfortable reading the part where they say that they aren't publishing personal information from the memo as if they're doing something to protect her privacy when they've given the internet way too much information on this person already. Things like the state of her mental health, some signal into her economic status, etc. Things that even non-devs on the internet could pick up on, it's so tone-deaf.

It sounds like she published an internal memo, somebody else leaked it, and the personal information was a fourth party's.

I hope that person that leaked her memo had a ridiculous amount of social capital built up with her, though I seriously doubt it.

Reading it over more closely, the article doesn't exclude the possibility that the person who leaked the memo was actually Ms Zhang. The anonymous "other software engineer" might just be another source the authors know, or even a reference given by the "leaker".

The simplest answer is that journalism relies heavily on credibility and BuzzFeed's brand has almost none of that as far as the public is concerned. If this was the WaPo or NYT they could probably get away with the usual "We'd tell you but that would be unprofessional."

Yet NYT still threatened to doxx Slate Star Codex.

> Still, she did not believe that the failures she observed during her two and a half years at the company were the result of bad intent by Facebook’s employees or leadership. It was a lack of resources, Zhang wrote, and the company’s tendency to focus on global activity that posed public relations risks, as opposed to electoral or civic harm.

> “Facebook projects an image of strength and competence to the outside world that can lend itself to such theories, but the reality is that many of our actions are slapdash and haphazard accidents,” she wrote.

> “We simply didn’t care enough to stop them”

This is the key takeaway, IMO. Not as an excuse for Facebook, but as an indictment of "slapdash" information technology in general, particularly social media. It's becoming more and more clear that "bringing the world closer together" is a pandora's box, one that Facebook is not equipped (motivated?) to deal with the consequences of. Maybe no company ever could be. Maybe this is simply a thing that shouldn't exist.

“A lack of resources” is not a thing a 500+ billion dollar company gets to lean on, and Sophie does acknowledge that.

This is the old “don’t tell me what you care about, show me your balance sheet and I’ll tell you what you care about”.

Right, which was the reason for the "(motivated?)". It is a genuinely hard problem, and I wouldn't be surprised if it's intractable even for a motivated organization, but at the very least there's no profit-incentive to put real effort into solving it. So for one reason or another, I don't think any company can be relied upon to keep a handle on this problem.

“Move fast and break things,” in other words.

I think it’s more an indictment of the former employee’s naïveté. Who, at this point, doesn’t think that the point of Facebook is to engage in manipulation at a global scale?

"“I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count,” she wrote."

The scale at which the platform is being used for political manipulation in every country is enormous, and if a junior data scientist is having to independently make these decisions, it's clear there's little interest in proactively dealing with this.

I have real doubts about how big this power of individuals really is.

But if random FB engineers actually can affect global political power, it is 100% certain that the intelligence services of the world are putting serious effort into placing their own people in these positions!

With real estate and rent so expensive, a protest by a junior employee pretty much ends in moving back with their parents, or homelessness.

Nah. Even were I a junior employee, I could go work at any company offering a six figure salary and continue to live comfortably without roommates in SF. Rent is exorbitant but you still end up with half your post-tax software engineer salary as disposable income.

From what I've seen of FB salaries, if an engineer is living paycheck to paycheck it's probably their own fault

What is the public interest in publishing her name after she has expressed concerns about her safety? Shame on Buzzfeed.

"In her post, Zhang said she did not want it to go public for fear of disrupting Facebook’s efforts to prevent problems around the upcoming 2020 US presidential election, and due to concerns about her own safety. BuzzFeed News is publishing parts of her memo that are clearly in the public interest."

She's not exactly going to great lengths to hide her identity; I'm watching it blow up on Twitter, with her named by full name.

Before or after Buzzfeed doxxed her?

According to a mutual friend, apparently buzzfeed released all this information without her permission. So you are correct, buzzfeed doxxed her.

It’s journalism 101. You provide the identity of your source to help the reader evaluate their credibility.

Anonymous sources are supposed to be used only in extreme circumstances. But these days that gets abused all the time.

The New York Times has published its rules for making a source anonymous, and they’re pretty good, IMO.

They didn't provide the identity of their source. They instead doxxed a third party who wished to remain private for her safety.

Isn't the source in this case an alleged whistleblower who leaked this note? I'm assuming it wasn't the original author who sent this to BuzzFeed.

I get frustrated over these pairs.

What is Facebook supposed to do? They could spend billions moderating every comment and like, but they'd piss off every politician worldwide and the users would all cry censorship (and that's if they get it perfectly correct). They could pick a side, but the same would apply with slightly fewer pissed off people. Or they could do nothing, save billions, and piss off fewer people.

And in the background, a small number of people continue to manipulate everything you see in legacy media, and no one really cares because we're used to it. Seriously. What the fuck?

You're right, moderation at Facebook's scale isn't feasible. It's the scale itself that's the problem. They've focused so much on "making the world more open and connected" that they never stopped to think how that could be weaponized.

If you focus on how everything could be weaponized you are a short sighted paranoid a skip and a hop away from fascism. Even tyrants aren't stupid enough to ban all hammers.

How about stopping the idiotic tool blaming and blaming the bad actors?

Where did I say they should focus on how it could be weaponized? I'm saying they never seemed to have even considered it. And that's a charitable take; they could very well have considered it and decided it wasn't worth their worry.

It's all in balance. Bad actors are responsible. But systems designers have a responsibility too. If you optimize a system for something (e.g., engagement), that's a reflection of your values. We're seeing the ugly consequences of Facebook's values.

If spending billions is what is required to make their product not harm others, then that's what they should do.

If that means the business doesn't break even, maybe there should be no Facebook.

That's fine. But if that's the criterion we use, you have to shut down every newspaper, radio, and TV station until they're harmless too. And that's just the start; wait until you hear what big oil and big tobacco and the defense industry have been up to.

Is that really what you're asking for here?

How in the hell could you think that the scale of moderation needed for Facebook is the same as for newspapers, radio, and TV stations?

I didn't say it was. I said they need to be made "harmless", that's the standard the comment I replied to says we require.

Apologies, seems I had a brain fart.

No worries, sorry if I was a bit blunt :)

I can see it in my least tech-savvy, least educated friends. As Facebook users they seem to be radicalising the longer I leave them to their devices.

But what’s the alternative?

If people want family & friends social media, where to go?

Aren’t the open/alternative platforms just as open to abuse, if not more so, given that no one like the whistleblower is even hired on open platforms?

The alternative is introducing some regulation around patterns of usage.

Bunch of votes, few insights; I’ve posted this as an Ask HN: https://news.ycombinator.com/edit?id=24479271

What's scary is that with all the resources that FB has, it still has to prioritize enforcement, which means that platforms like reddit or even HN have no chance of catching this.

Missing from the article is any causal link between Facebook bot farms and real-world effects, election outcomes, or deaths. It just says, oh, there were a million fake likes on a post in this country... months later, some political unrest. As if this has never happened before Facebook?

Bots and fake accounts are another form of advertising. Governments and political parties have always manipulated, influenced, or controlled legacy media. Online and offline, politicians disseminate misleading political ads. Partisan news networks attack their opponents all day long while claiming to be objective.

On the surface the outrage seems misplaced. This seems like business as usual.

Perhaps the outrage isn't misplaced if the goal is regulatory capture and entrenchment of the social media space. Imagine a world where "fact-checking" and identity verification is mandated by regulators as a prerequisite to posting online. This wave of censorship will be buoyed by a tide of righteous indignation.


Something I have always failed to understand is why there are people who still work for this company. She states “I know that I have blood on my hands by now”; doesn't everyone who works there? At this point, it is well known by everyone that this is a product flawed to the core. It is maintained by a company that insists it is not a media company to evade all social responsibility, and insists that its AI will solve the unsolvable problem of moderation at scale. Ethical alternatives in the form of federated social networks already exist. Why do people still work there? Do they not care?

A lot of people don't care about ethics and mortality in their work as long as they get paid and get to go home to a house/apartment at the end of the day.

I think you meant "morality" instead of "mortality", but I'll go with mortality anyway.

“One of the big tools of authoritarian regimes is to humiliate the opposition in the mind of the public so that they're not viewed as a credible or legitimate alternative,” she told BuzzFeed News. “There's a chilling effect. Why would I post something if I know that I'm going to deal with thousands or hundreds of these comments, that I'm going to be targeted?”

That's not just a tool for authoritarian regimes, that's pretty much the most used tool in any form of political conflict, in any country.

It’s weird watching Neal Stephenson novels come to life: miasma, apm, corporate-states, virtual worlds, mind viruses.

This sounds really bad. In searching the web for this title it seems like small news services are running this. To be fair to Facebook, I am willing to wait a day and see what they and other organizations/news/people say about this disclosure.

So what happened to Facebook's "real names" policy? If they got serious about that, fake accounts would be less of a problem.

It's still a policy and they try hard to enforce it, but despite taking down literally billions of fake accounts each year, it is hard to stop 100% of it.

Is it just me, or does it seem like, with both social media and “tech” in general, the ‘regulation axe’ is grinding - and that it is only a matter of time before the algorithms core to these companies’ business models ‘suffer’ from likely blunt, harsh regulatory instruments that will broadly stop this kind of influence and manipulation?

By doing so, it will also significantly harm these business models (and valuations) as we know it today.

Indeed! Social media amplified anti-vaxxers, climate change deniers, and all the possible fringe lunatics you can find.

Social media should be much more limited by the physical world. For example, if you never met a person at a real-life event, you shouldn't be able to connect with them via social media.

Throw out the news feeds. Throw out the ads altogether (because political propagandists will always find a way, if you want to make the distinction).

Oh there are always idiots trying to demand that cars be disassembled and tossed in the underbrush if they frighten horses.

What those grandstanding demagogue idiots want is about as relevant as all of the other non-Section 230 parts of the CDA that got thrown out by the courts.

The article claims that these kinds of manipulation got reported by international news, but this is the first time I've ever heard of any of the examples it lists, which leads me to believe these manipulations don't really have that much power.

Love the use of the term "inauthentic". They cannot say "fake" anymore.

The term inauthentic is more accurate. Someone paying 100 people to go troll or like things is causing "real" activity, but it's not "authentic".

It's also a clever little bit of legal CYA. "Coordinated inauthentic behavior" = fake accounts amplifying things. Actual extremists posting on main and organizing on FB? A-OK, as long as they exercise a bare minimum of discretion and avoid discussing specific illegal activities.

I'm not sure I follow your logic. "fake" seems like a completely wrong word to use to describe actual extremists (but I completely agree they should do more to police extremist content).

Coordinated inauthentic behavior describes exactly what it says on the tin. They try to limit coordinated inauthentic behavior even if the content is true and even if the users are "real" people (e.g. workers who are paid to click/promote/create content).
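As a rough illustration of what "coordinated" means operationally, here is a toy heuristic (entirely hypothetical, not Facebook's actual method): flag pairs of accounts that repeatedly act on the same posts within seconds of each other, regardless of whether the accounts belong to "real" people:

```python
from collections import defaultdict
from itertools import combinations

def coordination_scores(events, window=60):
    """Toy heuristic: count how often each pair of accounts engages
    with the same post within `window` seconds of each other.
    `events` is a list of (account, post_id, timestamp) tuples.
    High pair counts across many posts suggest coordination."""
    by_post = defaultdict(list)
    for account, post, ts in events:
        by_post[post].append((account, ts))
    pair_counts = defaultdict(int)
    for actions in by_post.values():
        for (a1, t1), (a2, t2) in combinations(sorted(actions), 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                pair_counts[tuple(sorted((a1, a2)))] += 1
    return dict(pair_counts)

events = [
    ("worker1", "p1", 0), ("worker2", "p1", 5),
    ("worker1", "p2", 100), ("worker2", "p2", 103),
    ("organic", "p1", 5000),  # engages much later, on its own schedule
]
print(coordination_scores(events))  # → {('worker1', 'worker2'): 2}
```

The paid clickworkers score highly even though each action is "real"; a lone organic user never does, which is the distinction "inauthentic" tries to capture.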

I'm saying FB heavily emphasizes its efforts against coordinated inauthentic behavior as if that were the only significant problem on its platform.

Extremism itself is a vacuous, normative term. Anyone calling for the end of segregation, the end of apartheid, or for gay marriage was once "extremist". So yes, it is perfectly A-OK for actual extremists to operate under a bare minimum of discretion.

That's a very disingenuous take, but those seem to be the norm for you. People calling for an end to apartheid have not been historically labelled as extremist (other than by apartheid regimes themselves) because apartheid has not been a norm in recent decades, and because the causes you mention all involve the expansion rather than the abridgement of rights, upholding a long-established egalitarian norm.

But to make you happy, I'll qualify my statement, and say that Facebook has been A-Ok with violent and frequently genocidal extremists organizing on its platform.

The only way anything will happen to Facebook is if these three things actually happen in sequence and within a short period of time of the first event occurring.

1) Facebook wittingly or unwittingly ignores political manipulation on its platform within the United States of America that demonstrably affects US political outcomes.

2) All necessary parts of the US government required to hold a corporation like Facebook accountable for 1) act in concert to do so.

3) The US mainstream media extensively reports on 1) and 2).

>Zhang said she turned down a $64,000 severance package from the company to avoid signing a nondisparagement agreement.

You really have to ask yourself what kind of place it is you're working for and what you're building, if a totally regular employee basically is paid hush-money to not speak about their job.

This isn't a private business any more, it's the mafia. People talk a lot about the culture of free speech and the rights of end-users, but we live in a world where a private company that builds a social media website, this isn't the NSA or anything, can stop an employee from speaking the truth.

It's time policy makers throw all of this out of the window, together with the anti-competitive non-competes that at this point affect IIRC, almost a fifth of the American workforce.

I know someone who got extra severance conditional on non-disparagement when leaving a pretty modest Canadian tech company.

Is this a more rare event than I thought?

Yes, grandfather comment is overly dramatic.

What you’re describing is the status quo termination agreement that companies require you sign in exchange for a severance payout. When people don’t have first-hand knowledge of unethical and dramatically harmful behavior by the company, and often even when they do, they sign it because they want the severance. What’s unusual in this case is not only did Sophie courageously speak up internally and now publicly, she selflessly forfeited her severance payout.

> she selflessly forfeited her severance payout.

Without discrediting her desire for the greater good, it is a lot easier to turn this kind of money down when you consider compensation at these big tech companies. To most people $64k sounds like a lot, but for a big tech employee that could be a fraction of one stock vesting event. She worked there for over a year, so she saw at least one vesting event, and Facebook's stock has been explosive over the last year. She very well could have netted $100k or more in a single vesting event. I'm sure she has enough money to tide her over until her next job.

It's pretty rare. Never have I ever met anyone who had or would sign such an agreement. Given my income bracket, I should have by now if it was merely uncommon. Then again, maybe not admitting to the agreement is part of the agreement. Insidious.

I've worked at three companies, and in all of them every single fired individual (as was the case here) was offered severance. Usually it's 2 months + 2 weeks per year of work, but higher severances are routinely offered if the company feels the person could be a problem. $64k (4-5 months of pay) is essentially go-away money, not FB trying to really hush somebody. Even a baseless, rejected EEOC complaint will cost more to resolve.

I was not talking about "severance" per se, but about non-disparagement agreements. I would agree that severance is typical.

That being said: California, Oregon, Washington State, Georgia, getting 4-5 months' pay as severance? That doesn't even make sense from a business perspective, imo.

If you're small enough to be face to face with investors, everyone has experience with volatile employees. If you're larger, you don't care about a single player anyway. That's just my experience.

Most (if not all) severance agreements in the case of firing include non-disparagement. Severance is just a cost of business: HR sees a volatile employee and automatically offers severance with a not-sue, not-disparage agreement. Employment lawyers are $600/hour and a huge distraction.


> Most (if not all) severance agreements in case of firing include non-disparagement.

I've never seen it. Employment lawyers are very happy to take cases against companies, who have money. Just like non-competes, they have been dropped from contracts over the last few decades (severability applies anyway) on the west coast.

As a personal anecdote in California: somebody I managed got 1 month of fake employment (technically employed but without access to anything) + 2 months' severance after being fired 8 months into the job, without any push from me or the fired employee. And he was extremely nice and non-confrontational; he just could not do the job.

It’s not rare, it’s common. Especially in the world of social media where people can leave reviews, etc.

> this isn't the NSA or anything

They sort of ARE the NSA. The NSA and CIA are part of the fabric of Silicon Valley, especially at companies like Amazon, Google, and Facebook.

Contrast that with Apple, where, as long as Steve Jobs was CEO, Apple flat out refused to join any of the 'Patriot surveillance, stop-terrorism-by-spying-on-American-citizens-and-the-entire-world' coalition.

Cognizant employees (contracted Facebook Moderators) are on camera admitting that they censor specific people based on their political leanings.

Facebook sent them specific memos about which violent images towards which politicians were not to be removed.

That particular set of facts escapes most coverage of this topic for obvious reasons.

These conversations about this topic will never be seen as sincere since they themselves are biased.

Facebook is trying to get attention worldwide by playing this master political role, among other things, which is just a decoy, because every day it's losing real-world relevance: fewer and fewer people use it.

Maybe in the US but in many of the countries listed, Facebook and its associated properties are synonymous with the internet.

I don't believe this to be true. It would only be true if you ignore that 'the internet' can change meaning VERY FAST, and almost inevitably does. Because that's what happens, everywhere in the world: people get really excited about new tech and possibilities and explore everything about it, very fast. Facebook is trying to build itself into the social/cultural fabric as some sort of institution of democracy or something like that, when in fact it's only a fad. This can change in almost the blink of an eye. Anyone who has taken the time to study platform strategy will know this.

Do any of the major platforms really have a handle on how to deal with these challenges? I'm not excusing the lack of oversight. But most companies that grow this quickly are a complete cluster inside. Imagine having to battle well funded state actors on top of trying to build a business.

Again, not saying Facebook shouldn't be held accountable. But it's always easy from the outside looking in.

I think your cause-effect is backwards. Moderation at scale does not work, and a social network does not have to be the all-out-for-clicks disinformation machine that Facebook is. The company chooses to make it that way because it is optimal for their bottom line; rage makes clicks, clicks make money. They already built the business, and they choose again and again not to change it. So, to your statement, it is not that the company accidentally grew to what it is today and now it is swamped with unsolvable problems; rather, the company solved a different problem and has no intention to solve the others (nor they can with this business model).

They likely will lose lots of money if they solve the problem, so they're not likely to do much about it unless forced to.

Twitter is kind of adequate: somewhat responsive, somewhat proactive, but primarily they have an API that's sufficiently open and accessible that social science researchers can identify and map the networks and behavior of political actors.
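To make the research angle concrete, here's a minimal sketch of the kind of network mapping such an open API enables. The data and function names are hypothetical (a real study would pull retweet records from the API); the point is just that "who amplifies whom" falls out of simple aggregation:

```python
from collections import Counter, defaultdict

def amplification_graph(retweets):
    """Build a weighted 'who amplifies whom' graph from retweet
    records. `retweets` is a list of (retweeter, original_author)
    pairs; edge weights count repeated amplification."""
    graph = defaultdict(Counter)
    for retweeter, author in retweets:
        graph[retweeter][author] += 1
    return graph

def top_amplifiers(graph, n=3):
    """Rank accounts by total retweets made; unusually heavy,
    narrowly focused amplifiers are candidates for closer review."""
    totals = {user: sum(counts.values()) for user, counts in graph.items()}
    return sorted(totals, key=totals.get, reverse=True)[:n]

# Hypothetical records: account "a" retweets "x" five times,
# account "b" retweets two different authors once each.
retweets = [("a", "x")] * 5 + [("b", "x"), ("b", "y")]
graph = amplification_graph(retweets)
print(top_amplifiers(graph))  # → ['a', 'b']
```

Researchers then run community detection or bot-scoring on graphs like this; the sketch only shows the aggregation step.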

Is Facebook really still trying to build a business? I'd say they're well beyond the startup phase.


Perhaps this is because the url has a .info gTLD coupled with a political domain name and is getting tripped up by some sort of spam filter. It sort of sounds like a shadow ban.

It's also possible that the domain was primarily spread by bots. There are countless possible reasons why any low quality domain could be blocked.

Every .info domain I've ever seen in my life has been a spam site. I didn't click the link because of that preconceived notion and opted to google it instead. And of course, this is the tagline:

> Creepy Uncle Joe Biden is back and ready to take a hands-on approach to America's problems!

Sure enough, it's a spam site.

Spam? Words have meaning. Spam is not normally defined as political commentary.

OK, you don't like the content. That's fine.

Did you get it in an unsolicited email, fax, text message, phone call, or other intrusive communication? Unless you did, it is not spam.

The video is worth watching. It's silent and short, probably animated GIF files, just showing actual stuff that has been on things like C-SPAN.

I wouldn’t point at two websites and claim that this alone is proof that FB is biased against conservatives. If anything, FB is biased towards conservatives, given policy actions by Joel Kaplan and a reluctance to take down misinformation from right-wing outlets.

Sounds like they need to increase severance packages

This is so melodramatic, and a bit pretentious to think your Facebook job has such an effect on the world.

That's a bit like saying that fox news has no effect within the US


This memo is from a member of the "fake engagement team", tasked with fighting fake accounts used for disinformation operations. The Rohingya issue was primarily a moderation issue; community and military leaders were issuing authentic calls for violence under their true name.

~~Can we get the actual report Zhang published, rather than a BuzzFeed link? I mean is Buzz Feed really considered news?~~


I retract my comment; I was unaware that BuzzFeed News is distinct from BuzzFeed proper.

They mention this in the writeup: it’s not going to be made public (yet).

Also BuzzFeed News does real journalism, despite the BuzzFeed origin of their group.

I’ll concur that BuzzFeed News does excellent journalism and should be considered distinct.


From their Award page:

BuzzFeed News is a two-time finalist for the Pulitzer Prize: for a stunning probe that proved operatives with apparent ties to Vladimir Putin have engaged in a targeted killing campaign against his perceived enemies on British and American soil, and an exposé of a dispute-settlement process used by multinational corporations to undermine domestic regulations and gut environmental laws at the expense of poorer nations.

Our reporting has also won a George Polk Award, National Magazine Award, Livingston Award, Society of Editors Press Award, National Association of Black Journalists Award, National Association of Hispanic Journalists Award, Mirror Award, GLAAD Media Award, London Press Club Award....

Regarding publishing the entire memo they stated:

>BuzzFeed News is not publishing Zhang’s full memo because it contains personal information. This story includes full excerpts when possible to provide appropriate context.
