Facebook moderators break NDAs to expose working conditions (theverge.com)
1014 points by notinversed 28 days ago | 590 comments

This was a valuable article to read.

Facebook is enormously valuable. They made something like $15B in net income in the last four quarters.

Content moderators are a necessary condition for that profit. If kiddie porn, gore and animal cruelty flooded the network, it would cease to be a destination visited by people that advertisers will pay to reach.

And yet, there are two sets of entry-level knowledge workers at Facebook: engineers ($150k/year, benefits, upward career trajectory) and content moderators ($30k/year, no benefits, likely going to acquire mental illnesses).

I understand the arguments about supply and demand of labour, but I'd have more respect for Facebook if they demonstrated awareness of this issue. The article talks about moderators re-evaluating the same piece of distressing content that they've already flagged. Why? I suspect because the moderator is cheap, and so Facebook isn't putting in the effort to ensure that each judgment is made only the minimum number of times.

More so than salary, I suspect Facebook considers the moderator cheap in terms of reputation risk. By outsourcing to contractors located offsite from main campus, engineers aren't thinking daily about the absolutely horrible stuff moderators are seeing, and so the one group doesn't impact Facebook's ability to hire engineers. This is a guess - can anyone at Facebook speak to whether engineers are aware of the working conditions of moderators, and agitate to improve their lot?

I've talked about this somewhere else. What you're seeing is just a manifestation of what I call "the fundamental problem of user-created content." Said problem is that warehousing and distributing content scales insanely well, but curation does not. Until we have strong AI, curation is a manual process. Moderators just aren't efficient enough at processing content for their output to pay for a full-time salary. You can cut costs by making end users into moderators (the Reddit model), but results may vary.

This problem applies to other forms of sites and content as well. The app store gets hundreds of submissions in the game category per day. Hosting and distributing that content is easy. Only a tiny fraction of those games are going to be played and rated enough times to show up in a recommendation engine. The bulk of the incoming content stream isn't being matched to interested people at all (sorting by interesting is the same curation issue as filtering by offensive).

Literally every content platform is going to have some problem similar to Facebook. We can blame Facebook, but the reality is no one has a good solution. Not even me.

The solution is to legislate it be done properly which is a business expense. Where it is too expensive to be done properly, there is no business case to have a user created content platform. Being a platform is enormously lucrative, shirking the responsibilities to cut costs and make it even more so is evil. Really.

Could you describe some potential legislation you'd be happy with? How would you define "properly"? I'm especially interested in how this would work in places like the United States where there are strong free speech laws.

The detail of how best to regulate is a subject matter for experts, not me. In principle precisely the same way we have legislated against pumping sulphur dioxide out of a chimney in the atmosphere. Same rules for everyone. Now go and compete. You get experts on pollution, mitigation and economics to advise on what will work best for the outcomes you want to achieve. Similar domain experts are required here.

Now I have no doubt that facebrick and the evil-goog will try for regulatory capture and employ their vast army of Washington lobbyists in rent-seeking activities, but that is a separate problem applicable in every industry where market failure means you have to regulate, not just this one. (And yes, it does need solving, absolutely urgently.) If you don't have to regulate because the market is working, great! Don't regulate! That's not what we have here; this is market failure, but we haven't yet regulated. Regulation will come, because there are clearly externalities in this market. Well, it will come unless we give up on democracy first, something with which facebrick and evil-goog seem really comfortable, as long as they supplant it.

>Said problem is that warehousing and distributing content scales insanely well but curation does not

I don't think it's a problem any more than weather is a "problem" that required the invention of the roof. As far as user-generated content goes, it's easy to invent the hose, but the faucet is much more complex.

It's irresponsible to provide only acceleration ("reach," "virality," etc.) for information without also implementing brakes. Moderation has been a known fix for decades, but people like Zuck -- in disposition and inexperience -- won't launch anything human-powered, and moderation requires a human eye. Fuck us, right?

I think your view will change when you start putting numbers on it. I'm going to talk about the games-in-the-app-store example, because it's the instance of the problem that I first thought about, and it is easier to put a monetary value on an app than on a comment.

Let's suppose Apple wants to make sure every game in their store gets at least 5 ratings. To do this they are going to pay people to play each game for 5 minutes and then assign a rating. In an hour a curator can get through 12 games; over an 8-hour day that's 96 games. Apple would need to hire about 7 (iirc) full-time employees just to give every game one rating. We wanted every game to have 5 ratings. Assuming that users on average provide one rating (but sometimes 0), we still need curators to provide the other 4. That means Apple would be employing 28 full-time employees just to get their content minimally curated.

Let's use the salary from the article and say each curator costs 30k a year. That means this whole department of curation is going to cost about 900k a year.

Most of what they are sorting is going to be shovelware anyway. To justify the cost of this department, the curators need to find gems: content that's really good but would otherwise have been overlooked. They need to find 900k of gems per year. Considering the average earnings of an app, this is just barely plausible. If you want to do anything more advanced with your curation, like having the curators generate keywords or figure out which demographic might like a game, your budget is blown for sure.
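The back-of-the-envelope math above can be checked in a few lines. One assumption is mine: a figure of 650 game submissions per day ("hundreds"), chosen because it reproduces the "about 7" employees the parent remembers:

```python
import math

# Assumed inputs; only the 650/day figure is not from the comment above.
submissions_per_day = 650   # "hundreds" of game submissions per day (assumption)
minutes_per_review = 5
hours_per_day = 8
target_ratings = 5          # desired ratings per game
user_ratings = 1            # ratings supplied by real users, on average
salary = 30_000             # $/year per curator, from the article

games_per_curator_day = (60 // minutes_per_review) * hours_per_day   # 12/hr * 8 hr
curators_for_one_pass = math.ceil(submissions_per_day / games_per_curator_day)
curators_needed = curators_for_one_pass * (target_ratings - user_ratings)
annual_cost = curators_needed * salary

print(games_per_curator_day, curators_for_one_pass, curators_needed, annual_cost)
# → 96 7 28 840000
```

So the "about 900k a year" figure above is the 840k result, rounded up.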

IDK how the economics of comment curation works out, but I bet it's even worse. After all, no one is even paying for comments, and making a comment is way less effort than making an app.

Why would they keep all the other numbers the same? They could raise the cost to list an app. They could charge extra for a "verified by Apple" designation. Charging monthly fees to be listed in the app store at all. Any of these would raise the bar on mere submissions to the app store, and beyond that supply a positive signal to the end user. There are plenty more options along these lines, too, but conventional business practice is to preserve the illusion that the internet is $free.

I don't care how much it costs, though, and I don't think Zuck cares either. If FB remains unmoderated it will invariably turn into a shithole, and people will go elsewhere rather than wading through a bunch of racist anti-vax cartoons just to find out where their friends are meeting for lunch next week. FB has plenty of money to pay as many human moderators as it would take to stop complaints, in all languages, but they won't. Even where FB might go bankrupt without brakes on reach, they'd sooner go bankrupt than hire humans to do the work.

I don’t understand your argument at all. Is 900k/year to curate a store that’s on almost all of their products (or even one) supposed to be a lot for Apple? Seems like a rounding error to me. Not to mention that the real benefit isn’t from finding gems but from not allowing rotten content to spoil your platform. If they (the FAANGs) can poach neuroscientists with 7-figure salaries, like we saw yesterday, they can afford to have humans do moderation and not treat those moderators like garbage. Also, they aren’t doing comment moderation, but content moderation: mostly videos, it seemed from the article. The fact of the matter is that causing PTSD in your workers is an enormous (not to mention cruel) externality that Facebook absolutely needs to be held accountable for.

The confusion arises because I'm not at all talking about filtering rotten content (aka Facebook's instance of the problem). I'm talking about sorting for good content, possibly identifying a niche hit in a stack of shovelware. On the surface these are very different tasks, but they are both manifestations of what I'm calling the fundamental problem: the task of curating, that is, identifying the content of incoming data in order to sort or filter it, must still be done by hand and thus rarely if ever pays for itself. Sure, 900k is a rounding error to Apple, but how much do they make off that 900k? And keep in mind the 900k figure was a bare minimum just to accomplish 5 ratings.

The issue is that if you want to identify content as "fun", "offensive", "dramatic", "tense", "crass" or any other of the myriad of labels that haven't been machine learned, very quickly the labor to generate these labels will cost more than the value generated by these labels.

Thanks, this makes much more sense. Typically we’ve used the market as a judge/pre-judge, but low inspiration me-too content in an opaque marketplace seems to disrupt this model significantly. It seems to me the value, especially for Apple, would be from having a better store UX. Tags that find users good stuff may not lead to Gems that you can pump to get more sales, but it might mean on-boarding a % more users per quarter to your service/ecosystem—especially over time as reputation builds. Conversely, if the App Store becomes too cluttered with garbage users may lose interest in the device or even ecosystem entirely if the quality of new apps they discover drops off dramatically due to a lack of tools/curation. An efficient curated store certainly improves the feeling of an experience versus one loaded with junk. Tagging isn’t trivial, however it can lead to a much better experience.

15 billion dollars in pure profit. You could pay a living wage with health benefits, or you could hire twice as many people and have everyone work less while paying more.

This is just Facebook being cheap.

Something something. Corporations are hive mind AI optimizing for growth and profit. They don’t give a shit. Amazon doesn’t give a shit unless it impacts the bottom line. Google barely has support and YouTube kids is full of questionable content.

The image of the Chernobyl liquidators just popped into my mind: each worker only allowed to shovel radioactive debris for 90 seconds before a mandatory time out. Treating content moderators in a similar way seems like a step in the right direction from a mental health perspective.

The largest-impact events will likely be the small percentage of posts that really resonate with the moderator. What would be best is the flexibility to take a break after reviewing posts at that level, rather than simply periodically.

The most important thing for moderators would be group and individual therapy sessions run at work with hired therapists. Group therapy is great because you can relate to other moderators' struggles, which helps with processing, and individual therapy helps process the most traumatic events. This is what Kik has implemented, and it seems to be effective.

It’s fundamental that some members of the public are problems. By prohibiting the content but keeping the people around, they are curating a culture where predators are allowed to roam on the platform as long as they don’t cross a specific line.

They do this because acknowledging and dealing with bad users would also mean impacting fake users that make them money with ad fraud, growth metrics and ginning up engagement.

End of the day, if you operate a social platform, and you have people uploading child pornography and animal torture video, you should do everything in your power to make the user go away, for good.

I don’t get the point of animal torture videos. The amount of meat consumed in the world is ludicrous. Eating meat is supporting animal torture, in a way.

Just saying.

Eating meat is part of life; no one is torturing the animals. Torture is done for a reason or for fun. There is a reason some animals are raised and used for food, and it's not to have fun torturing them.

I let you live your way, don't try to force others to live your way by using "scare tactics".

Well, certain practices common within the meat industry are torturous, regardless of whether they're intended for fun. "Food, Inc." & "Earthlings" are good resources.

> We can blame Facebook, but the reality is no one has a good solution.

We can take a step backwards and eliminate Facebook, Insta, etc from our societies and enact legislation that will govern the second generation of social networks.

What legislation? Specifically? What do you want to make law, that they have to hire moderators (which they already do?) or do you want to make it illegal to post "bad things" on the internet?

I would start with a few things:

- Make social networks mandated reporters for suspected child abuse. Like teachers or little league coaches, reporting would absolve them of liability.

- Prohibit generation of profit from illegal activity.

- Require transparency in political advertising, with an accountable entity and individual listed and publicly available.

- Require reporting of funding source and material individuals for paid political advertising.

- Make the platform liable for false product claims in cases where the platform facilitates direct or implied endorsements for a product. (“Jedi72 likes this homeopathic cancer treatment!”)

- Make the platform responsible for reasonable efforts to block and report on actions against fictitious users posing as people. Share liability for any claims, including libel, resulting from the actions of fictitious people.

A regulatory regime like this would either improve the signal/noise ratio or cause the company to abandon or prohibit certain actions or functions.

Let me start by saying that I am not running for elected office because it's a really hard job; even though I might believe I would do it with more integrity and rationality than the next person, I'm also smart enough to realize that everyone thinks that way and I'd probably end up within a standard deviation of average corruption.

What I'm saying, seriously, is that we need a society-wide blameless post-mortem on social networks. It needs to be a slow, careful discussion, where all the stakeholders have their voices heard, and we decide what is good for us all. I don't know that we need them at all, I'm not sure they provide ANY value to ANYONE, but as a non-user I'd of course be open to persuasion by current victims. Erm, users. In the meantime, it should be illegal to operate a social network. Nobody needs to go to prison yet, even though I think it's horrifyingly clear at this point that the ethics are out the window.

One obvious one: advertising and public conversation must always be separated; the same way no public schoolteacher may read scripture in class, it should be illegal for an internet service that hosts public conversation (e.g. Twitter) to allow sponsored content.

We should also establish guidelines for addiction. After cigarettes, drugs, sugar, etc; we should as a society be prepared to understand the various forms that addictive products can take and regulate them aggressively before they become serious problems. UX patterns like infinite scroll, pull-to-refresh, and push notifications are particularly suspect.

I also think we should close the apparent loophole in COPPA that allows parents to post photos of their children on social networks.

> that they have to hire moderators (which they already do?) or do you want to make it illegal to post "bad things" on the internet?

I tend to come down on the "free speech" side of these issues as much as possible; I would prefer unregulated public fora that (perhaps by requiring identity verification) encouraged good-faith participation in substantial discussions. I think if you are just fooling around, maybe you should head down to the bar and get drunk with your friends and do your shit talking there.

> What I'm saying, seriously, is that we need a society-wide blameless post-mortem on social networks

Social networks aren't dead, so we can't have a post-mortem.

Also, you can't have a society-wide blameless analytical conversation about anything, especially if there are significant conflicting interests involved; that's not how people work. (It's perhaps noteworthy that even while suggesting this you offer no objective descriptions of impacts with reasoned analysis of contributing factors, instead jumping straight to blame and policy responses, with some labelled as “obvious”.)

>In the meantime, it should be illegal to operate a social network

So what about the countless number of people whose livelihoods depend on social media in some capacity? They're supposed to just be fine with being irrevocably fucked over until the moral panic about social media subsides? (or, in your words, "we decide what is good for us all")

Where does the small business owner who depends on a social network factor in as a "stakeholder"?

Drugs, sugar, cigarettes are measurably and objectively harmful to your health and that's why they are regulated (or should be).

There isn't similar comparable scientific evidence that social media is nearly as harmful except for questionable non-reproducible psychology studies, so I don't think it's comparable at all.

I hear you saying you don't believe social media is addictive. Will you please kindly open the "digital wellbeing" app (or whatever it's called on iOS) on your phone and share the number of hours you've used social networks over the last week?

I couldn't help but notice that your comment doesn't address the other parts of his argument, nor did you choose to respond to the earlier reply you received (by dragonwriter) outlining the legitimate flaws in the reasoning you provided with your initial opinion.

Social media has a lot of problems - even this article on just Facebook outlines a number of them [0] but I agree with the above poster in saying that you can't have a societal post-mortem analysis of the effects of social media given its very much _not dead_ state. Advocating that society just presses a 'shut down' button until "we collectively decide how we should proceed" is an entirely unrealistic scenario.

[0] https://en.wikipedia.org/wiki/Criticism_of_Facebook

> nor did you choose to respond to the earlier reply you received

HN's posting frequency limits made that choice for me. The system is unfortunately biased towards drive-by comments and against engaging with feedback on your own comments.


> Advocating that society just presses a 'shut down' button until "we collectively decide how we should proceed" is an entirely unrealistic scenario

But that's what I'm advocating. I don't think I'm obligated to respond to people who only came by to reject the premise of my argument. More interesting discussions are available to anyone who shows up with an open mind.

You claim to be for free speech, yet most of what you propose directly infringes on the concept. You don’t get to determine who gets to say what.

The point of free speech is that it’s not regulated according to an opinion about “what is good for us all” or what is “needed.”

Ah, yes, Congress — the most competent subject matter experts regarding the internet and technology.

Well, to do that you’d need a congress that has the first clue what social media even is, and I’m guessing the US is easily 5-10 years away from that.

How does that solve the problem that the incoming content stream flows faster than any curation mechanism we can economically build today?

Well, first you slow the content stream by imposing a rate limit per-user, and users who post things that violate the policy are stopped from posting for a longer time.

Second, you do better at automatically detecting duplicate content that’s already been banned.
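The second idea can be sketched in a few lines: fingerprint content a moderator has already banned, so exact re-uploads never reach a human again. This is an illustration only; the helper names are mine, and production systems (e.g. PhotoDNA) use perceptual hashing so that re-encoded or slightly altered copies also match, where the exact SHA-256 match below would not:

```python
import hashlib

banned_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    # Exact-match fingerprint. Real pipelines use perceptual hashes
    # so trivially modified copies still match; SHA-256 does not.
    return hashlib.sha256(content).hexdigest()

def ban(content: bytes) -> None:
    """Record a moderator's decision so it never has to be made twice."""
    banned_hashes.add(fingerprint(content))

def needs_human_review(content: bytes) -> bool:
    # Previously banned uploads are rejected automatically,
    # sparing a moderator a repeat viewing.
    return fingerprint(content) not in banned_hashes

ban(b"already-reviewed clip")
print(needs_human_review(b"already-reviewed clip"))  # False: auto-blocked
print(needs_human_review(b"new upload"))             # True: goes to a moderator
```

This also speaks to the grandparent's point about moderators re-evaluating content they've already flagged: each judgment only needs to be stored once to be reused.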

"Strong" AI is a person, whom you also shouldn't mistreat. Let's hope they think the same about us.

Do you mean a business solution, or a solution to the "social problem" of people posting disgusting content?

I have no business solution, but the solution to the latter is to simply ignore/block people who post stuff you don't like.

Facebook won't do anything to help these moderators unless the cost of the negative PR exceeds the cost of doing something. Engineers at FB could help achieve this by causing agitation, for sure.

Maybe the best solution is for the moderators to unionize.

If you can think of a practical way to unionize workers on a 1-2 year contract with a high turnover rate, I'd love to hear it.

The deck is stacked against employee organization. If I were conspiracy-minded, I might even say this was by design.

>If you can think of a practical way to unionize workers on a 1-2 year contract with a high turnover rate, I'd love to hear it.

Adopt the Danish model: make it mandatory for companies with workforces over a certain size to at least vote on unionisation. That will shift the default from not being unionised to being unionised, except in cases where the workforce is actively convinced it's detrimental.

I love that idea! But I have a hard time seeing a law like that being passed in the U.S., even in liberal California.

Side thought, things like this help me understand why the Danes are so darn happy.

They don't actually meet the legal requirements for contractors. In CA, anyway, they'd have to be full-time employees, since they do the same thing every day, full time, they are required to be on-prem during dictated hours, and the business relies on their work (e.g. they can never finish their project).

The moderators are employees of the contracted organization (Cognizant, in this case). They’re not employees of FB.

Cognizant pays them hourly; they are referred to as "contractors" multiple times in the article, while the word "salary" does not appear.

Hourly employees are still employees. Salary v. Hourly is completely different distinction from employee/contractor (you can have an hourly or fixed monthly/annual contract rate, and you can get either model as an employee, though salary is more common if your pay and working conditions make you an FLSA exempt employee, since otherwise you might need to be treated as hourly for overtime and certain other things anyway.)

The union would be stronger if it included the engineers too, who have more leverage.

In my idealized universe FB engineers would have to rotate through moderation for 1 week per year. In my impossibly idealized universe the C suite would do the same.

Every single union on the planet was formed because workers wanted to improve their own wages and working conditions. Nobody organizes to protect other people’s interests. Why should engineers be the only rubes on the planet using their bargaining power to help other people instead of themselves?

> Every single union on the planet was formed because workers wanted to improve their own wages and working conditions. Nobody organizes to protect other people’s interests.

This overlooks the possibility that workers with different statuses can recognize commonality and, so, unionize as a coalition.

This is precisely what happened when the graduate students at the University of Virginia unionized with the University classified staff under UAW for health care and a living wage in 1996ish (? IIRC).

Workers can unionize across pay grades and professional classification if and when it suits such workers to do so.

I'm convinced that unionizing at this scale does not work. They'll always find cheaper people to do the grunt work, which presumably is there only to teach the AI. Maybe when the AI is stable enough, this AI "caretaker" position will take over and the burden of traumatic content will be alleviated: content would ideally never need to be reviewed by human eyes at all, or worst case just once. Maybe.

Up until then, people should unionize. Do whatever it takes to create a workplace which does not drain sanity from people.

Personally, I understand browsing toxic content to foster "immunity", or at least familiarity. Doing it for money sounds like a lose-lose scenario. Even call centers sound better, given the conditions.

...which is unlikely to happen because of the NDAs they have to sign.

If there's something wrong with this reasoning, please elaborate.

> ...which is unlikely to happen because of the NDAs they have to sign.

> If there's something wrong with this reasoning, please elaborate.

I would be utterly surprised if an NDA could be used to prevent an employee from exercising their right to organize (in the US at least). Just because you sign something does not make it legally enforceable.

Yeah - but how would they know what is legally enforceable or not?

It's really complicated, and I don't think it's reasonable to assume people have access to a clear answer on enforceability, and certainly not an answer they could rely upon enough to risk their job over.

I agree the whole thing is complicated and especially hard to navigate without guidance. I was merely responding to parent asking to validate their reasoning.

I don't understand how this isn't a problem perfectly solved by the market. If it's a shit job that isn't worth the pay, people won't do it. If it's an okay job that isn't really worth the pay, they will have high turnover. High turnover has a cost, maybe it's worth it to them. If the turnover is so high or the job is so undesirable, maybe they'll axe the role. If the role is so important to the business that it can't be axed, but people aren't taking the jobs, they'll have to pay more for it. If they can't pay more for it because it cuts too much into profitability, but it's necessary for the company to operate, then they are no longer a viable company in the market. Or they'll have to innovate or adapt. All possibilities are fully encapsulated in very simple dynamics.

>> If it's a shit job that isn't worth the pay, people won't do it.

That assumes perfect knowledge. As mentioned in the article, these moderators often do not understand what they are getting into until long after they have committed to the job. Once you have quit a previous job, moved, and set up your new life, only then do you realize your new "brand consultant" job means watching kittens die all day.

Jobs like this also open people to realities that we normally keep tucked away deep in our brains. We all know that cops don't investigate most reported crimes, but that doesn't come up very often for us. A Facebook moderator sees the crimes, and sees the lack of response by the police or anyone else, every minute. Repeatedly, every minute. Nobody can appreciate what this does to one's mental state until you have been there.

Nobody moves for $30k a year.

Are you serious? I know people that have moved for less. Making absolute statements like that doesn't do much for anyone.

Lol. Ever seen a migrant camp? Ever talked to a seasonal farm laborer? People will literally walk across continents for $30k/year.


Sounds like $30k is an improvement in their lives then, so is it really "too little"?

The word is "exploitative". Desperate people will do things because they have to. If I pay a homeless person $20 to humiliate himself, is that a fair transaction, or an exploitation of the asymmetry between our conditions?

To the chubby engineer type in 2019, yes. Make the drunk dwarf dance, it’s a free market.

When recession strikes and the gravy train ends, the same folks will be wearing Che Guevara T-shirts crying about unfairness.

There is no obligation for anyone to do anything. Do you assume that there is no free will and we are all like robots driven by short term profits?

That doesn't answer the parent's question.

If you call $20, for a person who owns nothing and is probably starving, a "short-term profit", you've very likely never experienced what it is like to live in poor conditions.

It is very possible to "improve" somebody's life and still be abusing your power over them and causing them damage in other ways.

> Sounds like $30k is an improvement in their lives then, so is it really "too little"?

Being put in prison is an improvement for many individuals in so far as they are guaranteed three square meals and a warm place to lay their head. Would you seriously argue that that's an improvement that is worth the cost to the individual?


That's really absurd to say, do you realize that $30k/yr is in the top 4% globally?


> Nobody moves for $30k a year.

I personally know people who have moved for half that amount.

"And the Union workhouses." demanded Scrooge. "Are they still in operation?" "Both very busy, sir..." "Those who are badly off must go there." "Many can't go there; and many would rather die." "If they would rather die," said Scrooge, "they had better do it, and decrease the surplus population."

C. Dickens: A Christmas Carol. In Prose. Being a Ghost Story of Christmas, 1843

That literally describes the problem, though. There are too many people who can do that job, so there's too much supply for the work to be valuable to the market. It isn't the responsibility of the Scrooges and the Facebooks to account for the fact that there isn't enough demand for that position for it to pay well. If people are unable to find work, or are otherwise unable to have enough stability to avoid exploitative work, that becomes a societal responsibility. Don't tell Facebook what that job should be worth, or what it should entail; that doesn't make any sense. Just like minimum wage laws, they don't make any sense.

If we're legitimately talking about situations where the balance of power is so unfavorable to the unemployed individual, the solution is to start talking about UBI, not micromanaging very specific roles at very specific companies.


edit: if it was not clear, Facebook is in fact part of society (by definition), and their responsibility is proportional to their role in society, no more, no less.

Yes, and taxes + UBI would represent that proportion. This meme is also only ridiculous because of the $100 figure. What if someone paid you $20k to fuck off? $100k?

Sorry for the discourse, my polisci past is flaring up but I do in fact explain why I support this (silly) tweet.

There are some kinds of nonviolent behavior we don't allow because we (I guess "we" is the majority of society?) think it's wrong, often around issues of financial exploitation. For example, even if you are mentally sound you can't sell yourself into indentured servitude, because that tends to only happen when someone with power exploits someone desperate, often (but not always) for reasons out of their control.

UBI is basically an artificial wage. Like a lot of things in life, a wage is a negotiation between two entities of vastly different levels of power. And as every public company's earnings proudly demonstrate, the value of what employees do for them is far, far more than the value the employees receive.

Why do you think those workers accept this? Do they like getting less value than they generate? It's clearly the result of an exploitative power dynamic: one side has to work or not eat, the other does not.

If you think someone should not have medicine and food dangled over their head in wage negotiations, the answer is that workers need to be paid the real value of the capital they create, and that needs to be law.

I hope one day our wages are looked back on like the absence of a minimum wage, the six-day week of 14+ hour days, child labor, indentured servitude, slavery, etc.: practices often viewed as normal in their time but disgusting now.

This GIF is mocking the idea of UBI and while for sure a silly meme, has an element of truth IMO.

Why would it need to be law? Since, in your view, capital / management / entrepreneurship contribute nothing of value, workers should have no problem creating successful businesses without them. Particularly if UBI takes care of basic needs during the early stages.

Why do you feel the need to misrepresent my argument? When did I say that management contributes nothing of value? Of all the arguments I was expecting, this one surprises me, as it makes no sense.

Capitalism works on leveraging advantage to pay workers less than the value that they create. Do you have anything to say about that, or are you going to continue to suggest magic discount coupons (UBI) which are worth less than the value a worker creates are something besides slavery with extra steps?

Feel free to scoff/mock me for questioning sacred holy capitalism - when immoral ideals are questioned and no logical answer is available, history tells us emotion and feelings are the standard responses.

A thing is worth its market-clearing price. Facebook has $15B in revenue and pays $12B in expenses (for the sake of argument, it's all payroll). I claim the workers are getting exactly what they're worth, and the remaining $3B is the value contributed by capital/management. Your claim seems to be that workers created the full $15B in value, and the $3B is being stolen from them. Apologies if you meant something else.

If that's true, Facebook's workforce could quit tomorrow, go build a competitor, and claim the full $15B for themselves. Why don't they do that?

The traditional Communist argument has been that they can't because capital owns the means of production. Factory workers would happily go start their own factory, but they can't afford the machines to fill it. That argument doesn't work too well here: the means of production needed to build Facebook are accessible even to college students. Basically an editor, a scripting language, a web server, and some linux boxes.

Another reasonable argument is that they can't because they need to eat. The time it would take between quitting their jobs at Facebook, Inc. and seeing revenue from Facebook Co-Op would exhaust their savings. That is a problem UBI can solve. It's not that UBI pays what you are worth, but that it removes the urgency that sometimes prevents people from getting their full value in the market.

This idea - that workers create all the value in a company, not just the subset they are paid for - is testable. If you're sure it's true and you want everyone else to believe it, then I think you'd want to run the experiment. UBI could be a way of doing that. Could also do grants that pay living expenses during entrepreneurial activity, or just straight-up house and feed co-op founders without asking for an equity stake.

Do you know what a Victorian-era workhouse is?

If someone suffered more than you, your suffering is invalid - is that your argument? I'm trying to understand what you are arguing.

Workhouses were the UBI of the era. Sort of a combo welfare and prison.

Thanks for posting this. There are people who do not get it.

> If it's a shit job that isn't worth the pay, people won't do it.

That is not how the labor market works... At all.

"Worth the pay" is overridden by "absolutely need this money for me or my kids to survive"

I responded to another poster. Desperation (debatable regarding whether or not this is possible in the United States) breaks markets, sure, but that doesn't mean that they suddenly need to be micromanaged. I have long been in favor of UBI in order to remove desperation from the table. I'm not, however, in favor of micromanaging roles at companies and arbitrarily determining what they're worth.

I've seen desperation in Australia, and our social welfare is head and shoulders above the US's. US$30,000 would revolutionise many people's lives. They would put up with a lot of crap to get that kind of money.

> debatable regarding whether or not this is possible in the United States

Wha...? Are you really that far removed from reality?

>> If it's a shit job that isn't worth the pay, people won't do it.

Tell that to the people who pick your fruits and veggies.

I think the same about Uber and the drama with drivers. If the work is so bad, the pay so terrible, and riders don’t want to pay more, then why hasn’t the market killed it off?

Now we have government intervention trying to legislate something that shouldn’t have existed in the first place?

I’m not saying any of this is right or wrong; I’m just at a loss how people get mad at Uber when the model shouldn’t have made it. Where did the market dynamics go wrong here?

It will, but it will take a few years.

The issue with Uber is they pay $0.70/mi for a task where the cost of fulfilling the task is $1.00. The drivers do it because they can get quick cash with no friction.

You can already see the system breaking down. Uber XLs are getting older and shittier. The 2012 Dodge Caravan that picked me up at the airport will be dead in a year. As this goes on, someone will die in a fiery wreck caused by an Uber with no brakes, and it will gather national attention and kill the company.

I always thought that the car age requirement was dumb. I drove a 2007 Subaru and it still worked perfectly. Someone I knew drove a 1997 BMW that was better than most Ubers I take.

I drive a 2003 Honda. It’s a great car in great shape. If I used it as an urban livery vehicle, it wouldn’t be in great shape.

  I don't understand how this
  isn't a problem perfectly
  solved by the market.
If I offer a homeless person $30 to let me spit on their face and they accept, the market has worked perfectly - but some people will still think I'm an asshole for spitting on that homeless person.

Below a certain level, people don't have any leverage at all.

Most people depend quite heavily on their meagre income; it's a bold thing to say someone could just 'quit' their job and spend 9 months looking for another one, especially if they don't have some pretty hard skills.

So in the realm of talent, yes, it would solve itself, which is why devs et al. often get 'free lunch', among other things.

For most working people, it's much harder.

"All possibilities are fully encapsulated in very simple dynamics."

Yes, of course, but the incredible power asymmetry between FB and not hugely skilled workers is the dynamic that drives all of this.

anyone who has done moderation for any active forum can tell you how nightmarish it can be. the more popular the platform, the more problems you have.

anecdotal: a friend who streams on twitch also mods for a few streamers, and they have issues even when a stream is in subscriber-only mode, simply because anonymity and distance from those affected empower people to do bad things.

edit: I am really surprised there aren't companies springing up to provide these services, seeing how most of these activities are required by law.

however, before bemoaning what these moderators are paid, just go look at your local 911 operators, who are government employees. just because we think a job should be paid more doesn't mean others do.

Anonymity is definitely empowerment (especially because Twitch has no effective ban system, so bad actors will create dozens of accounts and keep coming back), but even people acting under their real names will do horrible things. We've seen this with real names on FB/Google+.

It seems like there are no easy solutions for this and it's really frustrating: Societal norms are just completely shredded right now.

> but even people acting under their real names will do horrible things. We've seen this with real names on FB/Google+.

And on twitter. Their most famous user uses his real name.

Given the description of the videos in the article, I'm not sure that merely paying the moderators more would solve it.

I think for most people that I'd want moderating content (i.e. not sociopaths), making them richer isn't going to do anything about the real reason this is worse than a call centre: the content itself. I'd rather see facebook put more effort into protecting moderators from the content they're viewing. I realise this is a non-trivial problem, but here are a few ideas that might help:

* Once a video is confirmed bad, fingerprint it and block future uploads.

* Provide colour-inversion buttons to help reduce visual impact

* Rotate people between areas, so they aren't looking at the same class of bad content constantly

* Use ML to predict which content is likely to be the most severe; for this content, reduce the minimum viewing interval to ~5 seconds, and ensure that a single individual doesn't get too many of these.

* Flag accounts who post bad content too often, and shadow-ban them so their posts are not visible by anyone else (this could go in stages, starting off by blocking re-shares of their posts or something)
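The first bullet is roughly what perceptual-hash matching does. A toy sketch in pure Python, purely illustrative (not Facebook's or any real system's algorithm): an average hash plus a Hamming-distance match, so a trivially re-encoded copy of a confirmed-bad image still collides with the banned fingerprint:

```python
def average_hash(pixels, hash_size=8):
    """pixels: 2D list of grayscale values (0-255). Returns a 64-bit string."""
    h, w = len(pixels), len(pixels[0])
    # Downscale to hash_size x hash_size by block averaging.
    blocks = []
    for by in range(hash_size):
        for bx in range(hash_size):
            ys = range(by * h // hash_size, (by + 1) * h // hash_size)
            xs = range(bx * w // hash_size, (bx + 1) * w // hash_size)
            vals = [pixels[y][x] for y in ys for x in xs]
            blocks.append(sum(vals) / len(vals))
    mean = sum(blocks) / len(blocks)
    # Each block contributes one bit: brighter than the overall mean or not.
    return ''.join('1' if b > mean else '0' for b in blocks)

def hamming(a, b):
    """Number of differing bits between two equal-length hash strings."""
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def is_known_bad(candidate_hash, banned_hashes, max_distance=5):
    """Match within a Hamming tolerance, so minor edits/re-encodes still hit."""
    return any(hamming(candidate_hash, h) <= max_distance for h in banned_hashes)
```

The distance tolerance is the interesting knob: an exact hash (MD5 etc.) breaks if a single pixel changes, whereas matching within a small Hamming distance survives re-encodes and minor edits, though a determined troll can still mutate content past any fixed tolerance.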

> Flag accounts who post bad content too often

Too often? If we're talking about content that is traumatic to view (extreme violence, child porn, etc) then I think one time is enough for a flag, no?

That said, I do agree that shadow banning is the best option. Let them spin their wheels for a few weeks posting content before they realize no one is seeing it.

>* Provide colour-inversion buttons to help reduce visual impact

To add to this, some degree of pixelation would help without impacting the accuracy of assessments.

Provide colour-inversion buttons to help reduce visual impact

I was thinking also about blurring the picture/video.

They could have levels, and use moderator consensus to remove the pictures:

1. Blurred + mosaic

2. Contour lines only

3. Windowed/striped (only showing like 10-25% of the photo)

If 2/3 of the moderators above agree that it's a bad photo, it moves onto stage 4:

4. Color filters - Break the photo down into k-means color clusters, moderators choose which colors to see. They can see 100% of the colors, or one, or two, or however many they need.

If #4 agrees, the photo goes into the round folder.

No need for anyone to ever see the whole photo. It'll be pretty obvious that something violated the TOS without anyone being able to identify anything specific about the photo.

We're not talking about 'beyond the shadow of a doubt' here, we're talking about 'yeah, that's probably not ok.'
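Stage 4 could look roughly like this toy sketch (pure Python, a naive Lloyd's k-means rather than anything production-grade; all names hypothetical): cluster the pixels by colour, then let each moderator reveal only the clusters they choose.

```python
def kmeans_colors(pixels, k=4, iters=10):
    """Toy Lloyd's k-means over RGB tuples. Returns (centers, labels)."""
    # Deterministic init: evenly spaced pixels (real code would use k-means++).
    centers = [pixels[i * len(pixels) // k] for i in range(k)]
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assign each pixel to its nearest center (squared RGB distance).
        for i, p in enumerate(pixels):
            labels[i] = min(range(k),
                            key=lambda c: sum((p[d] - centers[c][d]) ** 2
                                              for d in range(3)))
        # Recompute each center as the mean of its assigned pixels.
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:  # keep the old center if a cluster empties out
                centers[c] = tuple(sum(p[d] for p in members) / len(members)
                                   for d in range(3))
    return centers, labels

def show_only_clusters(pixels, labels, visible):
    """Blank out every pixel whose colour cluster the moderator didn't reveal."""
    return [p if labels[i] in visible else (0, 0, 0)
            for i, p in enumerate(pixels)]
```

A moderator could start with one cluster visible and reveal more only as needed to make the call, which is the point of the scheme: a confident "probably not ok" without ever reconstructing the full image.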

Stuff like this should be brought in-house. You can't outsource responsibility. Engineering and AI/pattern matching should be in the loop as a key ingredient so that a terrible image/video won't have to be viewed by human eyes at all, or worst case just once.

I would agree. Having to watch some of this stuff is a great incentive to make the AI better.

I thought the same thing -- that once content is moderated, it should be trivial to block exactly the same content from being posted and moderated again. What a waste.

The problem is actual use and context sensitivity. If they let historically significant images get flat-banned, they look like big bad idiots trying to memory-hole history (blocking, say, death camp images) while still having to ban asshats who post the same image and say "Not enough."

Neither the text nor the image on its own would be ban-worthy, but the combination is boot-worthy. Extrapolating runs into Scunthorpe-problem issues.

The content may get modified quite a bit once censored, though. A good example was the Christchurch video: 4chan members modified it so many times that there are probably 4k-5k different iterations of it. NZ made a big deal about blocking it and arresting people for viewing it, making demand for it significantly higher. There would have to be some really efficient, highly optimized ML to group all the variations under a common label; AFAIK this does not yet exist for video.

The only partial mitigation that comes to mind would be if a small circle of your friends had to "approve" your video before people outside your circle could see it. That would put them on the hook for sharing bad things too. That might even reduce the psychological load on the moderators.

> cheap in terms of reputation risk

Also in terms of the reputation of Facebook itself: if there's any issue with moderation failures, Facebook can easily say that it's the contractor's fault.

Re: salaries, don't forget that there are also salaried engineers working on the content moderation problem: taking the huge datasets the content moderators are reviewing and applying machine learning, content ID, and who knows what else to automatically filter out a lot of the content, ideally with such a high certainty rate that the moderators never see it in the first place.

I want to believe the content moderators only end up seeing the 1% of content that is dubious, or brand new content (not published and hashed earlier). I don't understand in that regard why the one video mentioned in the article (about the iguana) wasn't marked as "ok" in the systems, or marked as "on hold for law enforcement" so the moderators didn't have to see it again.

> Can anyone at Facebook speak to whether engineers are aware of the working conditions of moderators, and agitate to improve their lot?

They are passively aware but mostly don't care. The engineers who express frustration are a vast minority. Most engineers are just super happy with their own situations, very focused on PSCs, and keep to their own stuff. Any internal frustrations are funnelled into approved feedback channels where the anger is watered down.

Nobody's making waves when everybody is focused on their own struggle. Gamifying for a docile workforce.


From a quick Google, "Performance Summary Cycle" - presumably regular reviews and bonuses.

> Facebook is enormously valuable. They made something like $15B in net income in the last four quarters.

Their profit is not proof they are valuable, because they are willfully abdicating responsibility for the damage they cause, and, further, involving themselves in government debates over their own regulation.

Advertising businesses are inherently suspect, and when one is addictive and potentially enabling election fraud etc, we need to be extra careful in declaring its value.

Anyone participating in the debate should have to disclose if they are a daily user or if their company advertises on any social network. (I'm clean)

Edit: my opinion of course is that FB, Insta, etc (any advertising product with push notifications and/or infinite scroll) ought to be illegal. In 50 or 100 years we'll all recognize that social networks are like internet cigarettes.

> my opinion of course is that FB, Insta, etc (any advertising product with push notifications and/or infinite scroll) ought to be illegal. In 50 or 100 years we'll all recognize that social networks are like internet cigarettes.

Well cigarettes are still not illegal, so we have some ways to go...

This was an earlier comment that disappeared. Frankly I ask myself consistently "what net value does Facebook add to the world?" other than being a 100% optional vice.

$30k/year, no benefits, likely going to acquire mental illnesses

Why do you think they don’t get benefits? Facebook contracts out to another company. The company they contract with still is probably providing some benefits

That's true, the article says Cognizant offers "a 24/7 hotline, full healthcare benefits, and other wellness programs". But at the same time, it quotes the on-site counsellor as having said "I don’t really know how to help you guys" when an employee went to him for help.

This leads me to believe that there is a qualitative difference in the calibre of benefits Cognizant provides versus those that Facebook provides.

Further, since the job has high turnover and can be expected to put a high toll on an employee's wellbeing, it's also important to me to think about what happens to their benefits if/when they quit. In the US model of employer-linked healthcare, this tends to result in losing your benefits when you need them most. By contrast, other organizations that do similar moderation jobs seem to offer psychological health care treatment even after the employee leaves the company. See https://www.theguardian.com/news/2017/may/25/facebook-modera... for more info.

> I understand the arguments about supply and demand of labour, but I'd have more respect for Facebook if they demonstrated awareness of this issue.

It is a very politically correct statement which, as usually happens with politically correct statements, really says nothing. I mean, what exactly do you propose they should do about it?

I am the last person to defend Facebook, but, seriously, they are running a business. That is, essentially, finding a way to do/own/produce some stuff while spending less on it than you receive for it. And cost management is all about supply and demand.

After all, it's not like Facebook wants to pay engineers $150k/year and for them to be happy. It needs good engineers and the only way to get them is to keep them happy, or else they stop coming. Engineers are not really special, they are not even exactly more necessary than cleaners, and loaders, and content moderators. The only difference is, demand for them is higher than supply.

And that would be tragic, in a sense, if everyone were randomly assigned a role: if you draw "engineer" you are lucky, but if you draw "content moderator" you are not. Or if we were talking about some factory in some giant village in Vietnam, where no matter how smart you are and no matter what you do, you'll be working a much harder job in conditions far, far worse than those described, for $500/year, and you really don't have any choice.

But we are not. No, this is Florida, $30K/year, and these are people choosing the job of content moderator over the job of an engineer, either because they don't want to do otherwise or cannot. And while many of them may have stories that can be seen as tragic (because who doesn't have one, really?), they are no more victims than anyone else born into living a life on this Earth. And thus, hardly entitled to anything.

Let me be clear: I really hate Facebook, and I find companies with a modus operandi like Cognizant's distasteful, to say the least. And I can see how Cognizant can be held accountable for giving somebody false hope, luring them into the job while promising something else. But what exactly do you want Facebook to do? They want a service as cheap as possible, they get quoted a price, they agree. That's it. It won't save their reputation after an article like this (because, well, public relations...), but it seems fair.

>This is a guess - can anyone at Facebook speak to whether engineers are aware of the working conditions of moderators, and agitate to improve their lot?

If you go work at Facebook as an engineer, chances are you aren't too bothered by ethical questions.

It says right in the article that the content moderators have full medical benefits

Sorry, yes, I could have expanded on that. You're right, they provide health benefits, in theory.

My experience with the US health care system is that there is a wide gulf in health benefits. When I worked for Microsoft in the late 2000s (as an employee, not as "dash trash", as contractors were called), I paid some very small premium like $10-20/month for a plan with 0 deductible, 0 copay, and pretty much every doctor was in network and covered at 100%. When I dislocated my shoulder, some orthopedic surgeon who used to consult for an NBA team looked at it for me.

I understand that that was considered quite good insurance and that most people in the USA don't get that level of care, much less at that price.

If you read Glassdoor reviews of Cognizant, people complain about the insurance offerings available to employees. https://www.homebaseiowa.gov/sites/homebaseiowa.gov/files/do... looks like it might be what they offered 2 years ago. It doesn't look very good to me. If I was earning $15/hour, I think I'd pretty much buy the cheapest plan and only use it if I was actively bleeding to death.

I'll also link to https://www.theguardian.com/news/2017/may/25/facebook-modera... again - it describes how other companies that moderate distressing content approach this. It includes the company paying for psychological treatment, including after the employee leaves the company.

>I understand that that was considered quite good insurance and that most people in the USA don't get that level of care, much less at that price.

AFAIK, Microsoft 'self-insures', in the sense that they simply pay the medical bills for their employees.

It's knowledge workers vs unskilled labour.

There are enough rules that I'd say it still qualifies as knowledge work (not as much as a developer's, but still some level of knowledge):

- Public (https://www.facebook.com/communitystandards) and private guidelines

- A "Known Question" document (15k words)

- Facebook's internal Workplace, with real-time (and sometimes contradictory) decisions for breaking news events

from https://www.theverge.com/2019/2/25/18229714/cognizant-facebo...

It's also people not being paid nearly the value they produce for an organization. Everything is always a race to the bottom for us

I was thinking about this more but you’re right. I doubt there’s any acquired knowledge needed to know what’s repugnant to the human mind.

I don't have first-hand experience with the job, but I've read a few articles about these sort of teams, as they're widely deployed by all the large Internet companies.

In my view, it's knowledge work. Sure, it's knowledge work that almost anyone can be trained to do, but there's a skill set required.

For example, moderators need to make judgment calls that distinguish between offensive uses and artistic/journalistic/parody uses. There are many cultures and in-groups on Earth, which makes this harder than it might seem at first blush. Not all nudity is obscenity, not all hate speech is blatant.

> For example, moderators need to make judgment calls that distinguish between offensive uses and artistic/journalistic/parody uses. There are many cultures and in-groups on Earth, which makes this harder than it might seem at first blush. Not all nudity is obscenity, not all hate speech is blatant.

In these cases I'm not sure it really matters. Over-moderation is going to win overall and I don't think moderators are going to get into that much trouble for being slightly over-zealous. I could be wrong but I feel like after a week of being on the job you'd get the hang of it.

From the article (and the others on the subject linked at its end): there are many rules, many exceptions and edge cases, and even mandates for current events (for example, there was a shooting 15 minutes ago; do you ban or keep the videos?). In addition to judging whether content infringes the rules, the moderator has to select the right rule being broken ("Do I delete it for terrorism? Do I delete it for graphic violence? Do I delete it for supporting a terrorist organisation?" from https://www.irishtimes.com/culture/tv-radio-web/facebook-s-d...).

Some of the decisions taken by the moderators are then reviewed by a "super"-moderator, and the match between the two decisions (including the rule chosen) is used as a performance indicator, which has to stay above 90% to avoid trouble.

It's not "unskilled" so much as (a) it can be learned on the job and (b) people don't see why they should pay more for top talent.

The second one is pretty important. Consider that sales, sports, and entertainment work similarly, where you can make a case that you should pay more to get the best and workers (or their agents) can use this to good effect.

I think "unskilled" and "can be learned on the job" are usually considered equivalent. Nearly anything you need to hire a human to do requires some kind of learning, at least to do "well" or "our way".

No, it's market forces. If the world was overflowing with employment opportunities, then unskilled labor would leave unpleasant jobs, and employers would have to pay them as much as it takes for someone to put up with the unpleasantness.

US unemployment rates are lower than they have been in more than a decade. There are more jobs than applicants.

It could be people prefer working at a desk and a screen in an air conditioned office to, say, schlepping boxes in a warehouse.

> It could be people prefer working at a desk and a screen in an air conditioned office to, say, schlepping boxes in a warehouse.

And judging from the article, agencies hiring content moderators will do plenty to paint the work as colourful and career-building, by outright lying about what contractors will actually do day-to-day.

Of course that’s going to look more attractive at face-value than being pitched a warehouse job by the foreman.

There is no penalty to quitting if the workers are unhappy with the work or feel it was misrepresented to them, and the warehouse jobs are still there waiting.

The penalty is the amount of time you have to spend looking for a new job, and doing it while you're still working your soul-sucking current job.

for the record the article states that cognizant, the contractor that FB deals w/for content moderation, does provide health benefits:

> Cognizant also offers a 24/7 hotline, full healthcare benefits, and other wellness programs.

I think all companies with user generated content should require every employee to moderate content for some amount of time. 1 hour per quarter would be plenty.

How can kiddie porn, gore and animal cruelty "flood the network" for people using Facebook? Don't you normally just see things from your friends, ads, and groups that you are a part of? If they post that, either call the cops, block them on Facebook, don't hang out with them irl, or leave the group as appropriate. Problem solved? I haven't used Facebook in 5+ years so maybe this is no longer accurate.

As you may have guessed from the downvotes, this is not even close to accurate. It should have been possible for you to realize this was inaccurate simply from how trivial the problem looked to you in comparison to how much effort is being spent on it.

I genuinely don't understand how an average Facebook user would encounter videos like those described in the article. Is it some recommender system gone awry?

As far as I know, mostly because people post them as comments on public or widely shared items, along with "get free sunglasses" and every other kind of spam. There are tons of these public posts from pages that do news, memes, politics, trolling, regular advertising, whatever.

How about training a classifier to detect and remove potentially bad stuff, then let the users who uploaded it argue that the classifier made a mistake? Only in those cases have human moderators look at it?

They already have classifiers, of course. It's not like the moderators go through everything posted to FB–you'd need millions of people to do that. What the mods actually do, presumably, is review content reported by users.

Then perhaps they need better classifiers? I don’t remember seeing any papers from FB about that, so it does not seem like a priority for them. Or just make the existing bad classifiers more aggressive and shift the moderator’s effort to reviewing good content.
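What the last two comments describe is basically two-threshold triage: auto-act on the confident ends of a classifier's score range and spend human time only on the ambiguous middle. A trivial sketch (thresholds invented for illustration); "making the classifier more aggressive" just means lowering the removal threshold and eating more appeals:

```python
def triage(score, remove_above=0.95, approve_below=0.05):
    """Route one piece of content by its classifier score (0 = clean, 1 = bad).

    Confident calls are automated; only the ambiguous middle band goes to a
    human moderator. Thresholds here are made up for illustration.
    """
    if score >= remove_above:
        return "auto_remove"   # uploader can appeal, which queues a human review
    if score <= approve_below:
        return "auto_approve"
    return "human_review"

# Lowering remove_above trades moderator workload for more false takedowns:
queue = [0.99, 0.60, 0.50, 0.07, 0.01]
print([triage(s) for s in queue])
# → ['auto_remove', 'human_review', 'human_review', 'human_review', 'auto_approve']
```

The open question from the parent comments is where those thresholds can safely sit; every bit of width in the middle band is another item a human has to see.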

"If kiddie porn, gore and animal cruelty flooded the network, it would cease to be a destination visited by people that advertisers will pay to reach."

I bet this can be done with ML and AI instead of human power. Yet FB uses its human cattle to censor conservative voices.

> I bet this can be done with ML and AI instead of human power.

AI and ML work in bulk. But they misclassify things that a human can easily discern.

Think Tesla VS Google. Google wants a self-driving car, and that is not working yet. Tesla wants a car that helps you to drive and that is good enough.

It's orders of magnitude easier to be almost always correct than to be correct all the time. And you only need one kiddie porn video in your Facebook feed to start worrying about Facebook.

It is a matter of time before this happens, even with self-driving cars. Using people to do this is just killing people. Think of all the mental effort it takes to censor. Humans aren't that perfect and cannot oversee all of FB's daily activity.

Good luck! There's at least a billion dollars in it for you if you figure out how to do this.

150k a year in Silicon Valley is similar to 30k a year in the Midwest...

Where are the moderators living?

This is very much an answered-in-the-article question.


>> If kiddie porn, gore and animal cruelty flooded the network, it would cease to be a destination visited by people that advertisers will pay to reach.

Or the opposite. People flock to the internet to see this stuff, the material that isn't on TV or even on dedicated commercial websites. They get to view it in the privacy of their homes, a secret between them and their trusted "friends" on facebook. Facebook knows that if it really cracks down, if cops bust down doors over this stuff, people will go elsewhere to share horrible material. So facebook does a poor job of moderation, one just good enough to avoid new regulation but not so good as to actually make a difference.

Advertisers care about eyeballs. They may claim to not want to be associated with X or Y content, but in reality they would rather be in front of an audience than not.

Advertisers do care a lot about where their ads are displayed.

There's a whole section of adtech dedicated to brand safety.

I believe it's over the top[1] but it's something marketing folks do care about.

Also the good old pop up ads were created for that reason, aeons ago.

[1] https://www.campaignlive.co.uk/article/advertisers-embrace-n...

Guy Rosen and other execs within the Integrity team continually skirt their responsibilities here. They claim they're doing better, but the second-order effects of crappy work conditions and demands keep cropping up. Zuck says one day we will hopefully be able to AI-away this integrity work (especially the most traumatizing parts), but he doesn't say a word about improving working conditions or pay while the work still needs to be done by humans. And I bet Zuck wouldn't be able to handle the content these people have to view. Sheryl does not care. She keeps repeating the same standard spiel about how contracting companies have to abide by a strict set of standards, and how they're ahead of the market in terms of pay and wellbeing. But it's still awful. The divide between contractors and full-time workers at Facebook is truly disgusting.

People who work at Facebook should be pushing for change. But they're numb to the spiel. They're cushy and looked after and don't want to create a fuss.

Rosen doesn't care. Zuck doesn't care. Sheryl doesn't care. What DO they care about? Perception. Sit in any high-up integrity meeting and you'll see the only thing they talk about is how "x" would be received by users at scale. There's no discussion of ethics or corporate responsibility. You can be talking about something pretty far out there, like how human rights intersect with takedown decisions, and all you get is a bunch of people umming and ahhing about lossy metrics and how Zuck wants this or that so we'd better hurry up. Or how awesome it'll look on our PSCs if we ship this thing.

Broken company.

You're right, Facebook is a broken company. Along that point, we should break it up.

I agree, and in a more enlightened future I hope we can assess companies like this on their net benefit to society and apply penalties when they act in ways that harm it.

Break it up or just stop using it?

"Conditions at the Phoenix site have not improved significantly since I visited. Last week, some employees were sent home after an infestation of bed bugs was discovered in the office — the second time bed bugs have been found there this year. Employees who contacted me worried that the infestation would spread to their own homes, and said managers told them Cognizant would not pay to clean their homes."

This is utterly nightmarish, given how costly bedbugs are to one's life (clearing out a home, including replacing all mattresses/couches, and bagging or hot-washing all clothing, sheets, towels, rugs...).

"A manager saw that she was not feeling well, and brought a trash can to her desk so she could vomit in it. So she did."

This particular manager put their employees in danger of catching illness, especially given what appears to be an open office floorplan where airborne sicknesses can travel the entire room. I'm shocked and appalled, and this is just the stuff I'm comfortable quoting from the article to be shocked and appalled by. The other stuff has convinced me to actively help friends and family get off Facebook, rather than just quietly quitting it myself.

This is industry standard in businesses with limited sick day or leave policies. Especially in call center type environments.

Once an employee has used their allotment, they receive a write up if they take off again no matter how ill. Too many write ups and you're fired. Managers have no discretion in this process, so they can only mitigate the impact by doing something like bringing the trash can over or buying hospital masks.

Not saying it’s acceptable! But this isn’t a Facebook problem specifically.

A local healthcare system has a similar policy: PTO (paid time off) is not differentiated between sick days vs. other days off. They don't offer any sort of maternity/paternity leave, aside from what the US Family Medical Leave Act (FMLA) requires, which is six weeks unpaid time off. Even worse, they require employees to burn up all of their accrued PTO days before taking any unpaid FMLA time. So you have new parents returning to work sleep-deprived, picking up new viruses from the petri dish that is their kid's new daycare, with no time off. And most of these workers are HEALTHCARE PROVIDERS.

How stupid are we as a nation that we don't mandate more humane/non-stupid policies, even for people in environments where coming to work unhealthy can often result in death?

> How stupid are we as a nation that we don't mandate more humane/non-stupid policies, even for people in environments where coming to work unhealthy can often result in death?

This is one of the rare times I believe malice trumps stupidity. Our healthcare industry generates billions and some of that money is being spent lobbying to keep things just as they are.

I don’t think I believe that healthcare companies lobby to prevent office condition reform from happening

FMLA is 12 weeks unpaid.

Unless your spouse works at the same company. :D

>If a father and mother have the same employer, they must share their leave, in effect halving each person's rights, if the employer so chooses.


> Even worse, they require employees to burn up all of their accrued PTO days before taking any unpaid FMLA time.

everything else you mentioned seems pretty bad, but what is so terrible about this? unless you are trying to keep your PTO buffer high so you can cash out on a higher salary when you leave, it doesn't seem to make much difference if you were going to need to take the unpaid leave anyway.

> everything else you mentioned seems pretty bad, but what is so terrible about this?

Because PTO (or even pure sick leave) has less restrictive usage conditions than FMLA, it means that anyone who needs FMLA for a longer-term issue, like a new child, will also exhaust all the leave they have available for short-term personal or family illness or stress relief, and be unable to use any until they have reaccrued it.

This is a legal but extremely employee-hostile policy that contributes to bad working conditions for everyone (not just the employees that need FMLA.)

Yep. FMLA can't be used for a routine illness—only a "serious health condition"[1]. But it's better for everyone—especially vulnerable populations like hospital patients—for people with less-severe illnesses to stay home. A routine cold can be fatal for a person undergoing chemotherapy.

[1] https://www.dol.gov/whd/fmla/employeeguide.pdf

I think the most pertinent question is why don't they quit?

It says a great deal about how broken the United States' job market and social safety net are. If minimum wage were $15, they could find another job that paid their basic living expenses. If health care weren't left up to your employer, they wouldn't be out of luck while looking for a different job. If there were any alternative, they wouldn't stay in this hellscape.

They stay because this is the best deal they could find. Think about the kind of society that makes that the case.

In general, folks working in in-demand areas of tech are pretty out of touch with how hard it is to "just quit" a job. If you're an engineer or data scientist making $200k+ then you very likely have some moderate savings to cushion you between jobs, and will have no trouble finding a new job and getting a slight raise. If you make over $200k, you are getting that much because your skills are in demand and employers need to pay that much.

If you're making $30K you certainly don't have any kind of buffer to hold you over between jobs, and don't have skills that are attractive to employers. Even basic things like interviewing are much harder. An engineer or data scientist can pretty much disappear from their desk for an entire day and no one will ask questions, you don't need time off for an interview, and if you do you likely have tons of leave. Someone making 30k most likely has to give notice of vacation many weeks in advance, if they even have leave benefits at all. And because your skills are less in demand your interview success rate is going to be much lower so you'll need even more time to interview.

It's likely very hard for these people to find new jobs even if they hate their current one, they are probably happy just to have income.

You clearly only read the first line.

...of what? It seems to integrate well with both the parent comment and the article.

Given that you are the parent comment, is it so hard to read conversation like this as building on each other rather than an assumption of being relentlessly contrarian? This is needlessly hostile to someone who clearly agrees with you.

I read it as an attempted counter-argument, but I could have misinterpreted.

It also sounds like they are sold on the empty promise that these jobs will get your foot through the door into the lucrative technology industry.

It seems pretty clear that they stayed after any illusion of that fell away.

It's dark, but you can manipulate and abuse people into staying through pretty much anything.

See abusive relationships or this handy article http://www.issendai.com/psychology/sick-systems.html

Maybe it's an illusion, or maybe it isn't. That depends on the individual in question. If somebody's previous work experience is laying tar down on roofs, they may consider the change to an office job to be well worth the added emotional toll.

Due to the low physicality of the work, there are likely people who consider it easy work, despite the psychological impact that seems obvious to you or me.

I had a really short contract at a major tech company. It was a completely different situation, I felt that I was respected and treated well. But it was a very low paid/low skilled job. I didn't see much of any room for growth. The biggest benefit to me was that it got me an interview for an internship with that company when I went to school for CS.

I don't expect these contractors to get much long term benefit from the position. And it sounds like most quit or get fired.

Maybe they think that flagging shit online is easy and pays well, only to later find themselves at home with anxiety. This kind of stuff is never easy to pinpoint.

Maybe when they apply. But from the sound of things that idea would vanish in the first week.

Many do leave, as the turnover rate is extremely high, but with an increasingly underemployed white-collar workforce and rising rent, healthcare, and education costs, there's a pretty huge pool of reserve labour to churn through.

This is really just a giant meatgrinder for the population outside of the "cognitive elite" class and probably a good sign of things to come for the workplace of the future of half of the population.

That is true, but a common theme among many of the stories is how much more they pay compared to other places.

These people are literally trading dollars for their mental health. That is the choice they are making.

Except those dollars amount to $28,800/year. Hardly enough to live on in the cities where these offices are, yet more than twice minimum wage ($15 full-time, versus $7.25 at what are often inconsistent hours [Edit: in Florida it's $8.46, but I think the point still stands]).

They aren't hustling because they want some extra cash. They're trading their mental health for the basic ability to be a living, eating human being.

Yeah, 28,000 was barely a livable wage in the incredibly low-cost city I used to live in. I can't even imagine trying to live on it in the cost-of-living hellscape that is Silicon Valley.

Technically the offices are in Phoenix, Austin, and Tampa. Still, those are major U.S. cities.

I'm not following this logic. They can't quit a job that pays double minimum wage unless minimum wage is increased to the amount they are currently getting paid? If minimum wage was $15 there would be some other crummy employer paying more than minimum wage to incentivize people to take a shitty job. Would they be stuck in that one too because the alternatives pay less?

If it was possible to live a normal life on the minimum wage then many would chose to instead of destroying their mental health to be able to pay rent.

Choose to do the bare minimum because we change the system to make it comfortable enough? That’s a good idea.

I’m not necessarily against raising minimum wage. It should probably be reviewed and adjusted more often than it is. But wouldn’t we be better off with a system where people are incentivized to do more than the bare minimum? I don’t think minimum wage should be intended to be the baseline for a comfortable life. It should be high enough that people want to work in the first place rather than take handouts but not so high that it’s an opportunity to stagnate. Frankly, minimum wage is for first time workers and people who have zero marketable job skills or some other issue that prevents them from doing more productive work. Most people can achieve much more with a little work experience.

They aren't necessarily trying to maximize, they're trying to keep their heads above water. If they had alternative option for doing that, they would take it.

> If minimum wage were $15, they could find another job that paid their basic living expenses.

Why is this necessarily true? Couldn't it be the case that minimum wage were $15 but they still couldn't find any other job?

Currently we have a surplus of actual jobs, it's just that a large number of them pay sub-living wages. If those were forced to pay more - even if it meant some of them disappeared - many more people would have the opportunity to climb out of poverty.

Edit: I may have misspoken when I said "surplus". What I was referring to is the very low unemployment rate we currently have, despite large numbers of people - with "jobs" - continuing to live in near-poverty.

> Currently we have a surplus of actual jobs

No, we don't. An actual job is what happens when the willingness to pay for labor connects with the willingness to provide it for pay.

What you seem to be referring to (additional willingness to purchase labor at prices below what people are willing to accept) is not a surplus, it is the normal condition of demand (and a parallel thing exists normally with supply, where there are people willing to provide something beyond what is currently being transacted in the market, but at higher-than-market prices).

Or we'd see a bump in automation and a reduction in the number of jobs available. When you raise the price of something (labor), corporations will reduce their consumption of it.

If there is a surplus of actual jobs, then they are being forced to pay more already; but they're not, hence the surplus.

> > If minimum wage were $15, they could find another job that paid their basic living expenses.

> Why is this necessarily true?

It costs money to take a surveying class, or buy tools to practice your skills. It also takes having money saved so if there is a crisis, car repairs, etc you don’t have to constantly work overtime to keep your bills paid.

Please reach out to a friend who is making less than a “living wage” in their area and ask them about career development and what they need. I appreciate your question.

> If minimum wage were $15

... then low-skilled candidates would NOT be able to find any job at all, because if low skilled candidate's expertise does not support $15/hour salary, and employers who could, potentially, pay less (e.g. $10/hour) are legally prohibited from employing people at that lower rate -- then there are no other jobs available to that low skilled candidate at all. Which means forced and hopeless unemployment.

Yes, and if we got rid of the minimum wage just think of all the wonderful job opportunities that would abound! Why, if you only pay people $1/hr, you can employ 7x as many people vs the current min. wage! It's simple economics.

I am surprised after reading a lot of comments here (not all), that I have not seen any discussion of Cognizant and their role. I am no fan of Facebook and I believe that they have significant responsibility here, but the contractor is, imo, the party directly responsible.

These people do not work for Facebook, and we don't know the nature of the contract in play. Are they paying per person, or a lump sum for some capacity at some accuracy rate. If Cognizant automated all of this would it be accepted under the contract?

Anyways, I don't want to shift focus away from Facebook so much as to recognize the contracted companies like Cognizant (which is what the whole article is about, btw, with some comments referring to Facebook). Accenture and Cognizant really shouldn't escape the scrutiny just for being overshadowed by a bigger name.

Facebook is the entity creating this work, and is the root of the problem (by contracting a 3rd party to perform work that they very well understand the consequences of). The article is suggesting that Facebook should be held accountable for the detriment (trauma) that their work is causing. I think the article even argues that the contractors aren't equipped to deal with a problem of this magnitude/seriousness. Facebook is in the best position to rectify their moral accounts, but we all know that's not going to happen because they are evil and so forth.

It's true that Cognizant has direct power to change things, but ultimately, the buck stops with Facebook, since they seem to be the vast majority of Cognizant's work and thus essentially have direct control of the purse strings. FB has the ability to change how Cognizant treats its workforce, and it's Facebook's choice to take a stand or to wash its hands of it. FB also has indirect say in demanding a certain standard ("98% accuracy target") for a given amount of money ($200M) -- though obviously if FB were to simply pay Cognizant more for the contract, there's no guarantee Cognizant would use that money for better worker pay/benefits (as opposed to giving bigger bonuses to executives, for example).

In the article, one of the contractors says that Cognizant puts up a "dog-and-pony show" whenever FB executives visit. Again, it's ultimately up to FB to decide how much they want to push past the facade.

Why wouldn't the buck actually stop with Cognizant management? FB isn't demanding these horrible labor practices, Cognizant is.

They’re paying rates that more or less require it.

Facebook is? What rate does Facebook pay and how much are these Cognizant employees getting paid?

The assumption is that Facebook is selecting the contracting agencies based on performance or cost. If Cognizant's performance doesn't meet the Facebook's standards they will get dropped, the same thing will happen Cognizant isn't competitive on price.

This downward pressure ends up directly impacting moderators. Cognizant needs to keep payroll costs low so they don't lose the contract, and the contracted accuracy target of 98% seems unrealistic. So moderators end up fearing for their jobs when they don't meet accuracy targets.

Cognizant should be banned from the H-1B program for these types of employee abuses. If they were threatened with something like that, they would actually listen, since that's how they make money. There should be some principle like "if we believe you are a scummy employer, you can be banned from being granted H-1B visas".

Cognizant isn't in the headline and unfortunately most people don't actually read things.

If the same article was about Apple and a subcon (not Foxconn) where do you think the limelight will be?

Just a warning - I found even a short description of some of the videos they had to watch fairly disturbing.

I don't think I could do that job for very long - let alone in a badly run, high pressure environment with low wages.

I don't see how an average person could be expected to witness some of the things mentioned in the article. I didn't have time to read the entire article, but do they have counselors on staff or something?

That one paragraph about organs was enough to ruin my day, and it was just text. I'm surprised such a "rash of videos" wasn't in the news somewhere.

From the article:

He sought out the on-site counselor for support, but found him unhelpful.

“He just flat-out told me: ‘I don’t really know how to help you guys,’”

So the counselors were probably an afterthought. I'd even bet the decision was made not as a way to keep the moderators from going insane, but to protect the company's image.

I mean, a counselor can only do so much in this type of situation, since the patient is not in a position to remove the negative stimuli from their lives.

Update June 19th, 10:37AM ET: This article has been updated to reflect the fact that a video that purportedly depicted organ harvesting was determined to be false and misleading.

I know a very well qualified woman who has previously worked with psychological trauma in the army, amongst other organisations, who has recently been hired to work at Facebook in Ireland as part of what sounds like a sizeable team. So it seems like Facebook aren't entirely ignorant of their responsibility here, although they do seem massively behind the curve.

> That one paragraph about organs was enough to ruin my day, and it was just text. I'm surprised such a "rash of videos" wasn't in the news somewhere.

I'm left wondering if it was a clip from a horror movie or something. Shitty to see, but not necessarily newsworthy like "live children chopped up for organs" would be.

No need to be left wondering. While I'm not sure these are the videos mentioned, it's not hard to find some just by googling for it. Or maybe don't, I wish I didn't.

I don't doubt you can find all sorts of horrifying video.

I feel like I'd have seen media reports of widely-shared videos featuring live, conscious kids being vivisected in order to harvest their organs.

Isn't the point of moderation/censorship to avoid having these videos widely-shared? These videos exist, they are fairly easy to find on specialized websites. Some are fake, but a lot are genuine. Videos showing executions during wars in Chechnya or in the middle east and such.

It really seems important to know whether the video is real or a horror movie.

Not from a PTSD point of view. If these workers think it’s real, it’ll have the same psychological effect as the real thing.

Looks like the article itself linked the video they mentioned - turns out they were not harvesting any organs in that particular video

>I don't see how an average person could be expected to witness some of the things mentioned in the article.

I'm no fan of Facebook, but in their defense, this is sort of the point.

Well, it still is their responsibility. It's like Dr Frankenstein letting the monster loose on the village and then going "hey guys, well, I sure don't know how to figure this stuff out".

If the government, the military, academia, the police or anyone else let their inventions loose on the world the way the private sector does, leaving it to us to figure out how to clean up the mess, we'd call them insane and demand they be shut down within a week.

It makes me aware of a curious dichotomy about content moderation on Facebook. On one hand, they have publicly committed to stem the flow of "Fake News", whatever that means, on their platform. On the other hand, we have this article about the suffering of contractors whose task is to block content that is too real. I guess this places Facebook's platform into a category with theme parks like Disneyland, where they aim at maintaining a precarious balance somewhere between the fake and the real.

Very few people want to see real videos of animals being tortured.

In fact it's their very realness that makes them especially horrifying.

Of course no one wants to see horrifying things. But current events force us to admit that Facebook is a platform for vast quantities of political speech. And if we're going to entertain political speech, we should be grown-ups and face reality; if we don't allow the horror to inform our discussions, if we turn away, then the horror will only grow.

Thanks for that warning. It's appreciated. I had to take a break in the middle of the article before continuing.

I found it very disturbing .. I closed the video within a few seconds of the descriptions starting. Got a knot in my stomach and I feel very angry .. I could not do that job for a day

There's a lot of variance between individuals. Given the choice this sounds like an overall better job than working a busy McDonald's drive through (people are assholes when they're hungry) but I'm not the kind of person that's particularly bothered by graphic violence. My opinion of humanity is dark enough to accommodate it. I'm sure many people would prefer McDonald's though.

Some of the people interviewed were complaining primarily about the bad working conditions (and their complaints are valid as far as I'm concerned). I would wager these people are not as bothered by the content (though I'm sure they don't like it) as the ones whose primary complaints are about the content. They could probably do the job with less burnout if the rest of the job was made to suck less (i.e. it wasn't a shitty call-center-style job with all the accompanying baggage).

Edit: Why am I getting down-voted? Can people legitimately not fathom that some people would not be seriously bothered by seeing this content? People post violent content to Facebook. Shock and gore sites exist because some people actively seek out(!!) the kind of content that these moderators are being exposed to. It stands to reason that the subset of the population that at least finds that content not mentally damaging is substantially larger than the group that seeks it out.

I think perhaps people feel that you are minimizing the horror and repulsiveness of the content by saying that these moderators have a better job than working at a fast food restaurant.

My wife, who has worked in a busy McDonald's for 5 years, says this sort of moderation is far, far worse than anything she had to deal with. And she's had to deal with human waste, violence, and direct verbal abuse from customers.

Your point about making the working environment better is a valid one, but think it is overshadowed by your assertion that the horrific content is acceptable to "many people".

Your views do resonate with some people on Reddit though. I remember saying that I didn't like the pained yelps, limping, and whining of Dogmeat in Fallout 4, and I was attacked and ridiculed for that. I know I wouldn't last more than a minute at this Facebook moderation job, it would scar me for life. I don't think that means I'm "wrong", any more than your views are "right".

>I think perhaps people feel that you are minimizing the horror and repulsiveness of the content by saying that these moderators have a better job than working at a fast food restaurant.

I think that perhaps those people need to realize that not everyone is as easily rattled as them. Some people go to the Holocaust museum and are mortified by the details of it and are sad for a week. Some people go and are like "yeah, people do terrible things sometimes, this is just the worst systemic instance to date" and then go out for beers afterward. Whenever there's an armed conflict there's some people who are F'd up by what they see and there's some people who say "yeah that sucked and I'm glad it's over" and there's people in between. I think it's pretty evident that the ability to cope with violence varies a lot between individuals.

>My wife, who has worked in a busy McDonald's for 5 years, says this sort of moderation is far, far worse than anything she had to deal with. And she's had to deal with human waste, violence, and direct verbal abuse from customers.

I've done that too. I'll take hospital janitor over anything in food service. The general public sucks. The floor doesn't complain when you didn't mop it up just the way it wanted. I'd probably try my hand at a moderating job before I went back to food service. Blood, violence, obscene pornography, etc, etc don't bother me. It's nasty but whatever, some people are terrible so what do you expect. There's other things that bother me but those aren't it.

> your assertion that the horrific content is acceptable to "many people".

There's levels of acceptable and there's a reason these employees are being paid more than those of the call center across the office park. I'm not suggesting that some employees find it acceptable in the abstract. I'm saying they are not so easily mentally harmed by it to consider it not worth the pay.

I see it no different than a physically fit 20yo who finds a ditch digging job to be worth his while because he can handle it whereas the 50yo almost certainly cannot handle it without much greater negative health consequences. If you can hack it for the pay then why not.

>I don't think that means I'm "wrong", any more than your views are "right".

You're not asserting that nobody can do this job, and I'm asserting that there exist people who can do this job (or at least people for whom the "shitty call center job" conditions are the primary irritant preventing them from doing it). That said, the down-vote-to-reply ratio makes me suspect that many people simply do not believe that some other people do not think like them.

The fact that these kind of shock videos are even a thing, kind of disproves your point.¹

There are also people who are habituated with cutting into living bodies (surgeons) but the difference is they are highly paid.

It's a bit unfair to equivocate and call this kind of work a good option for some people, when the vast majority of people in these jobs will probably be psychologically harmed by it² and are doing it because they have few other options.

It is relegated to an underclass, like all dangerous and undesirable work.

1: Statistically at least. 2: See also: https://www.researchgate.net/publication/228141419_A_Slaught...

Like your dream life—I bet facebook is hiring.

Would you please stop posting unsubstantive comments to HN, especially ones that cross into being unkind? You've been doing this a lot lately, and we eventually ban that sort of account.


Maybe Facebook should make all of their own employees do 15 minutes of moderation per day - just to share the pain out a bit....

Make the users do a little every so often. Like Slashdot's "hey, go moderate some posts" thing but forced to before they can use the platform anymore. That'd be fun.

Of course the real answer is that this sort of site is a bad idea and shouldn't exist. Wide-open signup and public visibility of content, or ease of sharing it with strangers. Bad combo, don't care how much money it's making them (and other, similar sites).

> Make the users do a little every so often.

Oh, dear god, no - have you ever used Reddit? This is what happens when you outsource moderation to the sorts of users who enjoy moderating other people.

Have you ever used Facebook? It’s much worse and this is specifically in reference to media that are against Facebook’s guidelines (e.g. nudity, gore, etc), not all content as a whole. Users doing some moderating in this context would simply be presenting flagged media randomly to users and asking if it breaks the guidelines.

Completely different things. Subreddits would be the equivalent of Facebook Pages, and those already have moderation tools for the people who run the page.

Reddit is a bit different than what was suggested here, since moderators are not a rotating position there and this encourages power trips.

Stack Overflow has the users do the moderation and it works well; maybe not perfect, but it's not bad.

There's not really a shortcut around having employees do it because Facebook has a significant amount of private content which users will not tolerate being shared, standards vary widely (imagine a vegan flagging every picture with leather visible or a post about eating meat as obscene — if you're requiring this, do you just disable their account once you notice it?), and especially because it's the target of coordinated group activity so errors won't be uncorrelated: get a cycle of outrage politics going and you'll have thousands of people around the world who have no easily-observed connection to each other flagging things as either objectionable or safe.

And yet Facebook lawyers recently argued to a judge that there is no expectation of privacy on Facebook.

I’m not defending them but there is a distinction between Facebook using your private data for mining and displaying it to random strangers. Any sort of user-driven moderation policy would need to have a way to handle that or people would stop using the service.

Isn't everything you post on FB available for anyone to browse?

You could have a system where multiple people flagging certain content would be a signal to give it manual review.
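The mechanism is simple enough to sketch. Here's a toy version (everything here is made up for illustration: the threshold of 3, the class name, the field names), showing the key detail that flags should be deduplicated per user so one person can't escalate content on their own:

```python
from collections import defaultdict

class FlagQueue:
    """Escalate content to manual review once enough distinct users flag it."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.flags = defaultdict(set)   # content_id -> set of user_ids who flagged it
        self.review_queue = []          # content_ids awaiting a human moderator

    def flag(self, content_id, user_id):
        self.flags[content_id].add(user_id)
        # Enqueue exactly once, the moment the distinct-flagger count hits the threshold.
        if len(self.flags[content_id]) == self.threshold:
            self.review_queue.append(content_id)

q = FlagQueue()
for user in ("alice", "bob", "bob", "carol"):   # bob's duplicate flag counts once
    q.flag("post-42", user)
print(q.review_queue)  # → ['post-42']
```

The real problem, as noted elsewhere in this thread, is adversarial coordination: thousands of real accounts flagging in concert makes "N distinct users" a weak signal on its own.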

I've always expected that a system which could work, albeit with flaws, would be to ratchet up censorship for any individual user who flags content. The more they flag, the less they see. The only 'problem' is that this would leave users fearing that there are others who have not flagged the same content objectionable that they have, and are viewing that content without being actively prevented. They'd be protected themselves, but it would be easy to convince them (if any convincing is even necessary) that 'almost everyone else' is consuming the raw obscene feed and being turned into monsters by it.
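To make the ratchet idea concrete, here's a minimal sketch under invented assumptions (a per-user tolerance cutoff starting at 1.0, dropping 0.1 per flag down to a floor, against a hypothetical per-item sensitivity score):

```python
class RatchetFilter:
    """Per-user filter: every flag lowers the user's tolerance cutoff,
    so heavier flaggers see progressively tamer content."""

    def __init__(self, start_tolerance=1.0, step=0.1, floor=0.1):
        self.tolerance = {}   # user_id -> current cutoff
        self.start, self.step, self.floor = start_tolerance, step, floor

    def flag(self, user_id):
        cur = self.tolerance.get(user_id, self.start)
        self.tolerance[user_id] = max(self.floor, cur - self.step)

    def visible(self, user_id, content_sensitivity):
        # Content scored above the user's cutoff is hidden from them.
        return content_sensitivity <= self.tolerance.get(user_id, self.start)

f = RatchetFilter()
assert f.visible("dana", 0.95)    # a fresh user sees almost everything
f.flag("dana")
f.flag("dana")                    # two flags drop dana's cutoff to ~0.8
assert not f.visible("dana", 0.95)
```

Which is exactly the 'problem' described above: each user ends up with their own narrowing view, while everyone else's feed remains unobservable to them.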

This already exists on Facebook.

Just what I want my grandmother to see when she logs into Facebook once a week to check on the grandkids.


Hello user, you have been randomly selected to help moderate.

Does this depict pedophilia? Y/N

[ Image here ]

This image was flagged by other users as depicting horrible gore and death, do you agree Y/N?

[ Image here ]

This post may contain hate speech, do you agree? Y/N

[ Embedded Nazi Screed post ]

Thank you for making Facebook a safer place for everybody!


Make it a captcha test: Select all the pictures of animal cruelty. /s

Wow, captcha from hell.

> Make the users do a little every so often.

"Facebook made me look at child porn" is probably a headline Facebook would prefer not to have going around.

Personally, I wouldn't go that far - just that I would think anyone doing this kind of job would need a lot of training, active support and monitoring and probably periodic breaks to do something else and decompress. Probably long term health care coverage as well.

Would that be expensive at the scale required by Facebook? Very.

Only if they dole out libra-bux


Personally I'm fond of zucc-bux

Seen on here the other day (can't remember the poster to credit them): "Douche Mark". A play on Deutsche Mark, for those who are either not European or not old enough to remember that currency, as seemed to be the case with some respondents to the original post.

Slashdot's "hey, go moderate some posts" thing but forced to before they can use the platform anymore. That'd be fun.

Overall not possible. On Slashdot everything is basically public, on Facebook it's largely private/restricted and I'm sure from past discussions that the worst stuff is in private groups or clusters of friends. They could do much more aggressive banning of users (and detecting new accounts designed to circumvent bans) for sharing such content or being in groups focused around such content, but that might hurt their most important metrics.

I agree with your point about the site being a bad idea, so maybe your first point was tongue-in-cheek. But displaying reported child pornography to people outside of your controlled environment? Not a great idea...

Just pricing in an externality. If that's a disgusting and shocking way to deal with this, then it doesn't follow to me that paying a little money to folks to do it makes it anywhere near OK.

So yeah, it was tongue in cheek, with a side of reductio.

No the concern with distributing likely child pornography and snuff films to users is that the whole point of moderation is to STOP distributing that. You want to have it moderated in as controlled an environment as possible. I'm all for not subjecting more people to it than necessary, but it should at least be people who explicitly consent and are trained and provided with the necessary resources.

Don't have to subject anyone to it. Don't have Facebook.

I have never distributed child pornography or snuff films. I'm already not subjecting anyone to it. Communications channels exist. Unless you want to undo that, or enable this behavior, you have to deal with it somehow.

We don't have to have huge, centralized communication sites where anyone can sign up and post, with the operator shielded from liability for what's posted so long as they subject a bunch of humans to it to filter it out, even when there's tons and tons of it posted constantly. We only have to have that in order to keep supporting the current ad-tech business models, which isn't a given.

I'm not saying if Sally starts a forum and it gets hacked and someone posts illegal stuff she should be fined or charged with anything. I'm not saying she should even be in trouble if she gives one person who turns out to be a jerkass posting rights and that person posts illegal stuff. But if she opens up public signups, gets a billion or two users, then says "well I guess I'll just have to pay a bunch of people money to look at all this gore porn that's getting posted" the rest of us should go "uh, no, you should instead just stop".

That's a very arbitrary line that's really just based on you making a judgment call for things you like or against things you don't like. You'd probably do well in American politics.

I don't like subjecting a bunch of people to horrible images on an industrial scale, no. And I don't think "it's making us money and we can't think of a better way" is an excuse to keep it up.

[EDIT] and given that second sentence, no, I'd get nowhere in American politics.

So your solution here is that instead of all these people voluntarily seeking other jobs, their department shouldn't have a reason to exist: shut down Facebook, and now all these people and more are involuntarily unemployed. Riight.

All the devs would leave

That's probably true - I wonder whether it would be because of the affront of having to do such menial tasks or the horror of realising that operating a global social network requires somebody to trawl through the filth and risk their mental health?

No, it's more like the devs have the necessary marketplace leverage and fallback to either quash that notion or move on to another, less ridiculous job. Don't think for a minute devs wouldn't be treated the same (and in some places ARE) if the corporate C-levels could get away with it without bleeding talent that is expensive to re-acquire. I am not, by any means, a Workers-of-the-World-Unite kind of person, but group negotiation and gasp unionization are becoming more and more necessary, if ONLY to get the beancounters to consider labor in their soulless cost calculations when making the decisions that lead to the conditions in the article.

Historically, unions are a tried and true way for improving working conditions and average compensation. Of course, that's also the reason why they are so vehemently opposed...

Very true, but in all honesty they can cause nearly as much trouble as they solve, or more, when they grow so big and bureaucratic as to no longer represent, or even actively harm, the workers they purport to support. It's these cases that a lot of union detractors latch onto in their arguments against unionization, with some degree of fairness. I speak as the son, grandson, and great-grandson of people who were VERY active in their respective unions. In particular, my great-grandfather helped unionize electrical workers in Texas refineries, and my father was a union steward. They are full of stories of the headaches the union bureaucracy caused.

That being said - those cases are really rare and in terms of harm, the naked calculating exploitation that corporations flirt with is WAY worse imo than the harm a too-big-for-its-britches union causes.

It is because it's a waste of their time and a bad approach, and everyone knows it on some fundamental level, however nice the egalitarian ideal would be in other ways.

It would be like expecting doctors to clean bedpans. It is a necessary task, and those who do it should receive appropriate respect, even if it's just "Thanks, glad I don't have to do it." But anyone off the street willing to do the dirty job could do it without a doctor's training, which took over half a decade just to reach /entry level/. Plus it's incredibly inefficient: you're effectively paying, say, $100/hr for janitorial work when the doctor could be saving lives instead.

Now, asking them to do it in a necessary and justifiable situation (say, working in an aid camp cut off by weather, or in space, where sending any extra bodies is expensive) is one thing, and they would be wrong to refuse out of pride then.

Absent that necessity, it shows a poor sense of their actual value, at least until medical degrees and skills become as common as the skills needed to clean bedpans.

Yes, I should perhaps clarify - I mentioned developers to point out that Facebook has the resources to provide excellent working conditions.

If I can adapt (and perhaps torture) your analogy, I'd say it's like Facebook currently has doctors who save lives and command high salaries, and janitors who change bedpans but don't have access to hot water and latex gloves. So inevitably, the janitors catch a disease (hospitals are filthy, after all! This is foreseeable!) and are no longer able to work, at which point they are replaced.

Given that the hospital can afford to pay the doctors, we might ask if they could splash out for latex gloves and soap for the janitors, too.

The difference is that there are plenty of people who are not negatively affected by that "horrible" content. Me, for example: if not for the low salary, I would do it.

Content moderation job is not for everyone.

Your example seems useless. There are minimum working-condition requirements enforced by the labor department, so it would be illegal for janitors to be forced to work in filthy conditions without the necessary gear. Similarly, FB would be fulfilling the working conditions the labor department sets for desk workers.

People are essentially asking whether FB can increase salary/perks by x amount because internet commenters somehow don't feel good about the current scenario.

> There would be minimum working condition requirement enforced by labor department.

Yeah, this is basically what I'm saying. :) Labour codes are developed over time. In the beginning, it's the wild west. Want to be a janitor and not use any personal protective equipment? Go right ahead! After workers keep getting sick, the government begins to legislate requirements around health and safety.

Awareness of mental illness is a relatively new thing. We'll probably see some developments in this area if the big tech companies continue to outsource moderation at scale. https://www.theguardian.com/news/2017/may/25/facebook-modera... is a nice article that describes accommodations that other companies provide to moderators. They include, as an example, monthly psychologist visits which continue after the person stops working for the organization. They also include training for the person's family and social support groups.

Executives would also not put up with it - obviously, to the point that I'm not sure whether you meant to include them or not when you said "employees".

Not wanting drudge work is common to nearly all people who have options. Why should devs get all the hate?

...but there must be a downside, no?

(stolen from Scott Adams.)

Probably not the ones, if any, that lurk 4chan and other extreme imageboards, having been desensitized to such content, or who even get something out of it.

I actually wonder now how much of their workforce would fit in that demographic.

Anyone that's been lurking/posting on the chans for a long time also knows neurotypicals couldn't deal with the content.

Just because some people become desensitized doesn't mean the obscene content won't damage the mental health of the entire set of people.

I was just thinking that. 4chan janitors are volunteers. Of course, the payoff is helping 4chan exist, not helping Facebook make a few billion more, so the incentive is still vastly different. Still, that might be the right recruitment pool.

I don’t think so. They’d just beat off to the orphan vivisection videos all day long.

There is a lot more to that website than /b/. The janitors on /out/ do a very good job of keeping the board clean.

That would be great; then they could just shut the whole thing down and the world would instantly improve by 10%.

Have you ever worked with Mechanical Turk or anything of this sort? You pretty much have to do at least 15 minutes of manual moderation per day. Training human workers, setting up rating templates, finding human mental "bugs", i.e. things that raters/moderators get consistently wrong: it all takes quite a lot of time.

Granted, not everyone does that, someone nibbling at the frontend or sitting deep in some distributed DB generally does not care much about data quality.
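One common way to surface those human "bugs" (a sketch with made-up toy data, not any real Turk API): compare each rater's labels against the per-item majority vote, then rank raters by how often they diverge. A rater who consistently disagrees with the majority is worth retraining or auditing.

```python
from collections import Counter

# item_id -> {rater: label}; toy data standing in for real rating output
ratings = {
    "a": {"r1": "ok", "r2": "ok", "r3": "bad"},
    "b": {"r1": "ok", "r2": "ok", "r3": "bad"},
    "c": {"r1": "bad", "r2": "bad", "r3": "bad"},
}

def disagreement_rates(ratings):
    """Fraction of items where each rater departs from the majority label."""
    misses, seen = Counter(), Counter()
    for labels in ratings.values():
        majority = Counter(labels.values()).most_common(1)[0][0]
        for rater, label in labels.items():
            seen[rater] += 1
            if label != majority:
                misses[rater] += 1
    return {r: misses[r] / seen[r] for r in seen}

rates = disagreement_rates(ratings)
print(rates)  # r3 diverges on 2 of 3 items; r1 and r2 never do
```

Majority vote is the crudest consensus measure; real pipelines tend to weight raters by track record or use agreement statistics, but even this catches the "consistently wrong" cases the comment describes.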

Make the executives do this.

It’s common practice for them to do so. Not just at Facebook, but YouTube and other platforms.

It’s management 101

Oversight over key quality metrics (including understanding what exactly raters/moderators do) is a VP or even SVP level job. It's critical if your entire org's goal is to deliver ever improving data quality at reasonable latency and reasonable reliability (example: Google Search).

Not to mention that, in data driven orgs, the launch/no launch debates are focused on those metrics. So if you want to call the shots you kind of need an in-depth understanding.

You and the GP claim this, yet if these execs have truly been exposed to the worst that gets posted to their platforms, why has there been no appreciable change? Unless they're so hardened to it that they can't empathize with the horrors they're asking people to face.

There's no alternative. The only other option (for Facebook and content moderation) is to shut down the shop & return equity to investors, or do nothing and wait for the platform to blow up. And even then someone will make Facebook v2.

Just the economic incentive to cut moderators loose and use algorithms would be enough to do so, if it were possible. After all, you can buy a decent amount of compute power for $30k+/yr, and that will certainly deliver more than the 100 or so decisions per day you get from a content moderator.
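Back-of-the-envelope on those numbers (the ~$30k salary and ~100 decisions/day figures are from the comments above; the workdays-per-year count is my assumption):

```python
salary = 30_000          # $/year, figure from the thread
decisions_per_day = 100  # QPD figure from the thread
workdays = 250           # assumption: ~50 weeks x 5 days

decisions_per_year = decisions_per_day * workdays
cost_per_decision = salary / decisions_per_year
print(decisions_per_year, cost_per_decision)  # -> 25000 decisions at $1.20 each
```

At roughly a dollar per human judgment, the pressure to replace or pre-filter with per-inference compute costing fractions of a cent is obvious.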

PS. I'm sure that most of the work is being done by algorithms anyhow.

That’s all-or-nothing thinking. The immediate alternative is to improve working conditions for moderators, or to impose more stringent bans and punishments on those who post that sort of material. And those are just for starters.

Look, I wish everyone had a nice life, I really do, but markets set salaries, while governments sometimes intervene (e.g. by setting a minimum wage).

Don't know about the banning situation, that'd require insider Facebook knowledge. I'm sure they ban some people.

The Market Police don’t haul you off if you pay something different than the “market wage.” Markets set a floor on wages by making it so you won’t get enough workers if you pay less. But companies can easily pay more than the floor if they want to. Obviously they don’t want to, because they love money, but it’s ridiculous to frame this as some iron law of markets.


This makes complete sense. Why pigeonhole employees into, well, pigeonholes? Technology, Operations, Marketing, and more, each understanding more about the others' roles, challenges, and solutions. A knowledge organisation is based on knowledge. I'd favour a scale that could be chosen: 5-50% outside of one's core role.

[Clarity: Am not a Facebook employee.]

I suspect it would come with a "manager's discretion" clause that would ultimately effectively eliminate it, like 20% time at Google.
